text
stringlengths
16
1.15M
label
int64
0
10
international journal computational sciences applications ijcsa october integrating fuzzy ant colony system fuzzy vehicle routing problem time windows sandhya department computer science engineering maharishi markandeswar university ambala haryana india abstract paper fuzzy vrptw uncertain travel time considered credibility theory used model problem specifies preference index desired travel times reach customers fall time windows propose integration fuzzy ant colony system based evolutionary algorithm solve problem preserving constraints computational results certain benchmark problems short long time horizons presented show effectiveness algorithm comparison different preferences indexes obtained help user making suitable decisions keywords vrptw fuzzy sets credibility theory ant colony system uncertainty stochastic simulation transportation one important component logistics efficient utilization vehicles directly affects logistic cost money spent transportation moreover proper utilization vehicles also important environmental view point use automated route planning scheduling lead huge savings transportation cost ranging efficient routing fleets crucial issue companies formulated vehicle routing routing problem first introduced danting rameser till many variants problem proposed vehicle routing problem time windows one flavor vrp vrptw least cost route central depot customers designed way customers visited homogenous vehicles preserving capacity time window constraints problem assumed time travel one customer another equivalent distance customers moreover types problems parameters assumed deterministic real life scenarios assumptions donot hold varying road conditions link failures rush hours congestion etc lead uncertainties data algorithms developed deterministic problems work situations paper vrptw fuzzy travel time considered uncertainty handled using fuzzy logic fuzzy travel time represented triangular fuzzy number international journal computational sciences applications ijcsa october credibility theory used model problem improved ant colony system iacs used find efficient route problem main goal paper develop algorithm provide user route minimum distance uncertain travel time desired preference index rest paper organized review presented section section discusses essential basics fuzzy theory model based credibility theory proposed section section presents improved acs used solve model results based experimentations discussed section finally section presents conclusions scope future work review vrp concerned finding minimum set routes starting ending central depot homogenous vehicles serve number customers demands goods capacity constraint taxonomy vrp found vrptw considers time window customer service provided vehicle arrive starting time window arrive closing time window taxonomy found problem many exact metaheuristics algorithms proposed solving vrptw categorized bibliography metaheuristics extensions found became infeasible dynamic mixed integer linear programming approach nearest neighbor branch cut algorithm used solve problem time dependent model follow fifo proposed tabu search approach solve tdvrptw customers characterized soft time windows model presented satisfies fifo another approach donati use multi ant colony local search improvement approach update slack feasible time delays travel times analyzed discretizing time space thus satisfying fifo property however models fail uncertainties arises various detailed summary various uncertain parameters vrptw found lack data 
due extreme complexity problem requires subjective judgment fuzzy set theory provides meaningful methodologies handle uncertainty vagueness triangular fuzzy numbers used represent fuzzy travel time route construction method proposed solve fuzzy optimization model using imperialist competitive algorithm presented fuzzy vehicle routing problem time window erbao use fuzzy credibility theory model vehicle routing problem fuzzy demand uses integration stochastic simulation differential evolution algorithms solve model however requires lot parameters taken care zheng baodinf liu present integration fuzzy simulation genetic algorithms design hybrid intelligent algorithm solving fuzzy vehicle routing model two new types credibility programming models including fuzzy chance constraint programming fuzzy goal programming presented model fuzzy vrp fuzzy travel time fuzzy concepts genetic algorithm used solution vrp finally proposes grasp meta heuristic solve vrptw travel time uncertain chance constraint model build using credibility approach solve problem however proposed algorithm appears inefficient grasp metaheuristic restart independent previous hand stochastic element aco allows build international journal computational sciences applications ijcsa october variety different paper vrptw fuzzy travel time considered credibility theory used build chance constraint programming model problem improved ant colony metaheuristic used obtain optimal routes using approach decision maker also evaluate different planning scenarios choose best alternative desired confidence level credibility theory section basic concepts results fuzzy measure theory summarized term fuzzy logic introduced lofti zadeh contrast conventional logic fuzzy logic allows intermediate values defined conventional evaluations like true false fuzzy numbers numbers possess fuzzy properties paper represented fuzzy travel time two customers triangular fuzzy number triangular fuzzy number represented triplet mean value mode left right extremes spread membership function zadeh proposed concept possibility measure fuzzy variables counterpart probability theory crisp fuzzy set fuzzy variable possibility measure necessity theory available proposed credibility average possibility necessity possibilityof event measured favorable cases contrast probability event favorable cases measured let nonempty set power set element called event also empty set order present axiomatic definition possibility necessary assign number pos event indicates possibility occur axiom pos normality axiom axiom pos non negativity axiom axiom pos maximality axiom figure shows possibility fuzzy event pos international journal computational sciences applications ijcsa october figure possibility fuzzy event explicit expression possibility pos given necessity event defined impossibility complement event nec fig shows necessity fuzzy event nec figure necessity fuzzy event explicit expression possibility nec given credibility event pos credibility measure signifies credibility solution satisfies constraints explicit expression credibility case triangular fuzzy measure constraint model fuzzy vrptw define fuzzy vrptw follows given set geographically distributed customers requiring services within specific time period set homogenous vehicles stationed central depot known demands customers uncertain travel time customers lying within known ranges international journal computational sciences applications ijcsa october likely values known objective fuzzy vrptw find route minimum distance 
specified confidence level meet customers time windows requirements assuming vehicle starts ends tour central depot indexed vehicle fixed identical capacity customer visited one vehicle customer predefined time window served demand customer fixed assumedthat exceed vehicle capacity travel time customer fixed assumed expressed triangular fuzzy number let departure time vehicle customer arrival time summation departure time previous customer fuzzy travel time node time begin service maximum arrival time opening time fuzzy travel time arrival time time begin service next customer also fuzzy however service time opening time windows crisp numbers special case triangular fuzzy numbers three defining numbers equal obtain credibility time begin service next customer exceed close time know travel time next customer smaller closing time chances serving customer vehicle grow greater difference closing time maximum travel time greater chance serve customer difference small customer may served therefore preference index solution verifies fuzzy constraint service times within corresponding time window goal determine value result route minimum distance stochastic simulation done thus objective corresponding chance constraint model fuzzy vrptw using credibility theory follows international journal computational sciences applications ijcsa october subject following indices notations considered customer indexes denoting base station vehicles demand customer capacity vehicle cij cost moving node expressed terms distance customer sij service time customer time window customer tij fuzzy travel time wtiis waiting time customer fuzzy time begin service binary variable introduced vehicle travels directly customer customer otherwise objective function seeks minimize total traveled distance whereas constraint specifies every customer visited one vehicle splitting deliveries states tour starts ends depot indexed capacity constraint preserved ensure demand customer exceeds vehicle preserves constraints ensuring sum fuzzy travelling time customer service time waiting time less closing time customer window preserves time begin service route within specified preference index solution approach section first stochastic simulation used calculate total distance ant colony heuristic used obtain least cost route plan best value dispatcher preference index international journal computational sciences applications ijcsa october simulation additional distance paper travel time customer uncertain represented triangular fuzzy numbers algorithms deterministic problems applied fuzzy vrptw moreover real values travel time known reaching customer however uncertain travel time considered deterministic stochastic simulation summarize algorithm follows step simulate actual travel time customer following process step generate random number within left right boundaries triangular fuzzy travel time customer calculate membership step generate random number step compare use actual travel time otherwise generate compare satisfy condition step calculate total distance moving along planned route step repeat step times calculate average total distance construction using ant colony optimization obtain best solution enhance ant colony optimization applied algorithm works step initialize pheromone matrix place ants depot set unvisited customers step repeat marked visited step start ant mark depot current location step current location choose customer visited pseudorandom proportional rule given wheren uniformly generated random number visibility 
customer defined cost travelling node distance node case travel time uncertain values known reaching customers may possible vehicle arrives acustomer serve expiry time window international journal computational sciences applications ijcsa october waiting time location service started difference latest arrival time ljand actual arrival time customer measure urgency customer served customer selected according probability given also set customers successfully visited current location vehicle without violating following time capacity constraints time constraints arrival customer must closing time customer capacity constraints step mark current location update set unvisited also update capacity current time ant step current location find else repeat step paths find best path followed minimum total travelled distance try improving route applying local search step update pheromone matrix global pheromone updation rule constant controls speed evaporation pheromone number routes current best solution deposited pheromone links given constant tour length current solution step repeat step desired number iteration provide best obtained solution experience proposed algorithm encoded matlab study uses dataset generated example zheng liu two types international journal computational sciences applications ijcsa october experimental conditions based time windows generated assume customers short time horizons long time horizons customer labeled assumed depot experiment demand customer distance fuzzy travel time customer start time closing time customer long time horizon assumed dataset short time period total opening duration assumed customer relative parameters values used implementation listed table simulation parameters simulation parameter parameter value number customers number iterations capacity vehicles initial pheromone value arcs ants credibility depot close time service time comparison results obtained algorithm zheng various parameters presented table international journal computational sciences applications ijcsa october table comparison table factors zheng approach meta heuristics genetic grasp ant colony system total iterations time consumed hours minute minutes vehicle used total distance vehicle routes loads vehicles robust less one observe algorithm produces effective results zheng comparable results grasp moreover proposed algorithm robust terms utilization vehicles evaluate importance dispatcher preference varied interval step average computational results times given fig aggregated cost influence prefrence index fig influence preference index international journal computational sciences applications ijcsa jcsa october figure conclude decision maker risk lover choose lower values crr whereas risk adverse higher values plan higher cost figure shows effect fuzzy travel time time windows long duration problems confidence level time missed time windows customers fig missed time windows long horizon problems one note travelling fuzzy sspeed given customers served within time windows short horizon problems customers suffered shown fig example customer closing time window around actual arrival time vehicle around time missed time windows customers fig missed time windows short problems set every customer served within itss time boundaries shown fig international journal computational sciences applications ijcsa jcsa october time missed time windows customers fig missed time windows short problems deterministic assumptions vrptw make unsuitable real world environment paper vrptw fuzzy travel time 
considered capture real life scenario scenario goal construct efficient reliable routes problem propose chance constrained model problem using fuzzy credibility theory fuzzy travel time represented triangular fuzzy number additionally stochastic tochastic simulation done get fuzzy travel time ant colony optimization algorithm used get optimal solution problem reasonable time apply solution approach problems short long duration time windows concluded proposed approach performs well structure problem provides improved results comparable results comparison different confidence levels done show influence total distance concluded higher values preference index leads higher cost comparison helps decision maker choosing different values confidence level get different results based cheapness robustness robustness references gendreau michel metaheuristics vehicle routing problem extensions categorized bibliography springer eksioglu burak arifvolkanvural arnold reisman vehicle routing problem taxonomic review computers industrial engineering gupta radha bijendra singh dhaneshwar pandey multi objective fuzzy vehicle routing problem casee study international journal contemporary mathematical sciences international journal computational sciences applications ijcsa october bansal sandhya rajeev goel mohan use ant colony system solving vehicle routing problem time window constraints proceedings second international conference soft computing problem solving socpros december springer india soo raymond kuo yang yong haurtay survey progress research vehicle routing problem time window constraints symposium progress information communication technology spict laporte gilbert fifty years vehicle routing transportation science laporte nobert exact algorithms vehicle routing combinatorial optimization amsterdam baldacci bartolini mingozzi roberti exact solution framework broad class vehicle routing problems computational management time lysgaard jens clarke wright savings algorithm department management science logistics aarhus school business olli michel gendreau metaheuristics vehicle routing problem time windows report malandraki chryssi mark daskin time dependent vehicle routing problems formulations properties heuristic algorithms transportation science choua soumia michel gendreau potvin vehicle dispatching travel times european journal operational donati alberto time dependent vehicle routing problem multi ant colony system european journal operational research demirel tufan nihan cetin demirel belgintasdelen time dependent vehicle routing problem fuzzy traveling times different traffic conditions journal logic soft computing wang zhang chen novel algorithm solve vehicle routing problem time windows imperialist competitive algorithm advances information sciences service sciences erbao cao lai mingyong hybrid differential evolution algorithm vehicle routing problem fuzzy demands journal computational applied mathematics zheng yongshuang baoding liu fuzzy vehicle routing model credibility measure hybrid intelligent algorithm applied mathematics computation goncalves gilles tiente hsu jianxu vehicle routing problem time windows fuzzy demands approach based possibility theory international journal advanced operations management brito julio moreno verdegay fuzzy optimization vehicle routing problems conf brito fuzzy vehicle routing problem time windows proceedings ipmu vol zadeh fuzzy sets information control kaufman introduction theory fuzzy subsets academic press new york zadeh fuzzy sets basis theory 
possibility fuzzy sets systems liu uncertain theory introduce axiomatic foundations springer berlin gemeinschaften white transport policy time decide office official publications european communities dantzig george john ramser truck dispatching problem management science
9
prediction control projectile impact point using approximate statistical moments oct cenk abhyudai paper trajectory prediction control design desired hit point projectile studied projectiles subject environment noise wind effect measurement noise addition mathematical models projectiles contain large number important states taken account realistic prediction furthermore dynamics projectiles contain nonlinear functions monomials sine functions address issues formulate stochastic model projectile showed set transformations projectile dynamics contains nonlinearities form monomials next step derived approximate moment dynamics system using approximation method still suffers size system address problem selected subset statistical moments showed give reliable approximations mean standard deviation impact point real projectile finally used selected moments derive control law reduces error hit desired point ntroduction broad sense projectile ranged weapon moves air presence external forces endures motion inertia begining aristotle later galileo projectile trajectory prediction impact points studied goal studies provide realistic mathematical model order explain behavior projectiles mathematical modeling allows understand motion projectiles requiring much lower cost experimental analysis however mathematical models become rapidly convoluted parameters system considered atmospeheric conditions air density fuel hence forming optimal model contains critical information projectile essential various approximations methods assumptions used obtain reliable models paper define novel model predict impact points ballistic targets predicting points essential order avoid hitting constrained areas create effective defense systems points order reach reasonable impact point prediction ipp several deterministic stochastic approaches studied various types kalman filters maximum likelihood estimator stochastic model predictive control recent works also included demir department electrical computer engineering university delaware newark usa cdemir singh department electrical computer engineering biomedical engineering mathematical sciences center bioinformatics computational biology university delaware newark usa absingh wind effect crucial obtaining realisctic ipp however highly inaccessible receive information wind instantaneously hastily limited sensor accuracy furthermore deviations wind speed direction random makes unrealizable predict future evolution wind noise also shifts impact points projectiles makes classic ipp model erroneous paper effect wind modelled stochastic process presence randomness statistical moments reliable tools give useful information mean variance impact points however always possible determine statistical moments highly nonlinear systems unclosed moment dynamics means higher order moments appear lower ones interpret moments case various approximation techniques used called closure techniques see unfortunately methods mainly developed deal higher order moments monomial form instance approximate skewness function mean variance however ipp contains nonlinearities trigonometric form due nature show using euler formula transform system new coordinates transformed system modelled classic monomial form change variables next step derive moment dynamics transformed system dynamics free trigonometric functions yet still unclosed hence apply approximation close approximation gives reliable results limit weak correlation states system case due presence independent noise terms different states ipp 
ultimate aim aerospace studies control projectile around desired trajectory named projectile guidance many guidance laws developed propotional navigation guidance png used well performed target stationary png applied projectile changing forces act changes obtained using configurations canards wings tails change configurations achieved various control strategies paper used feedback control move along projectile desired trajectory next start analysis defining projectile dynamic models xact dynamic odel rojectile model projectile motion first need define proper coordinates projectile dynamics contain two frames one table summary notation used parameter cna clp uci vci wci rxi ryi rzi description location projectile roll pitch yaw angles angular velocity components projectile velocity components force components projectile physical moment components projectile inertia matrix sine cosine tangent total velocity projectile characteristic length atmospheric density normal force aerodynamic coefficient wind components roll pitch damping aerodynamic coefficients distance center mass center magnus pressure distance center mass center pressure magnus force aerodynamic coefficient zero yaw axial force aerodynamic coefficient white noise components angle attack side slip angle ith canard velocity ith canard characteristic length ith canard surface area ith canard lift drag force canard aerodynamic lift drag coefficients ith canard mach number ith canard angle ith canard distance center gravity yaw pitch angle error control gains control inputs inertial frame often construed earth coordinate frame one projectile referenced frame body frame frames position states denoted orientation states denoted note position orientation independent parameters called six degrees freedom also seen figure position states give information location projectile earth coordinate portrayed euler angles transformation transformation axes measured euler angle derivatives reason necessary define matrix allow separate euler angle derivatives orthonormal components thus taking inverse define dynamics body velocity components depend acting force projectile explicitly gravitational force aerodynamic steady state force aerodynamic magnus force canard lifting force velocity components moreover angular velocity depends physical moments acting projectile moments aerodynamic steady state moment aerodynamic unsteady moment aerodynamic magnus moment moment canard connection angular velocity physical moment given equation shows position states depend body translational velocities furthermore depend angles projectile earth frame origin figure matrix equation known rotation matrix matrix created standard aerospace rotation sequence allows represent speed projectile directions earth coordinate system orientation states projectile known euler angles called specifically roll pitch yaw inertial angular rates time derivative impact point fig model schematic projectile inertial coordinate frame projectile fully identified position inertial coordinates orientation knowing degrees freedom one predict impact points point projectile hits inertia matrix ixx izx projectile ixy izy ixz izz force moment terms equations defined following gravitational force force pulls projectile towards center earth aerodynamic force physical moment effect pressure shear stress body projectile force split lift force opposite gravitational force drag force perpendicular gravitational force opposite lateral velocity projectile definitions aerodynamic moment constructed point 
body using forces addition canard lifting force moment aerodynamic force moment canard surface last phenomena projectile magnus force moment seen spinning projectiles pressure difference opposite sides spining projectile creates force moment equations full description dynamics projectile clear dynamics highly nonlinear hence different variations model used considering different assumptions next section briefly review current approximations iii inear odel odified inear odel rojectile order decrease complexity projectile dynamics researchers came different assumptions one famous framework built based series simplifying assumptions known projectile linear theory theory allows define analytic solution projectile however usually generates considerable error impact point prediction overcome issue projectile modified linear model interpreted relaxing assumptions specifically pitch angle small sin cos projectile roll rate pitch angle assumed constant two changes projectile modified linear model cos cos cos sin cos sin cna cos cna cdd clp iyy cna ixx iyy iyy cna ixx iyy note paper superscript used describe components fixed frame projectile models deterministic however reality motion projectiles deterministic noise components calls new model projectile next section introduce new stochastic model projectile stochastic noise effect tochastic odified inear odel rojectile start introducing general stochastic differential equation sde typical equation form sde written state vector system dynamics wiener process whose mean zero projectiles contain miscellaneous physical components subject noise various noise sources include measurement noise sensor noise wind effect etc consistent previous studies model noise terms state independent white noise cos cos cos sin constant hdwi dwi hdwi dwj rest paper denotes expected value understood effect noise different coordinates figure modifications dynamics projectile form sde equation thus call model projectile stochastic modified linear model psmlm statistical moment dynamics psmlm moment dynamics general sde form equation derived using formula start analysis writing first statistical moment dynamics dhx sum order moment linear moment dynamics written compactly contains moment system order vector matrix determined using solve linear equation effortless desired moment order always combination higher order moments however nonlinear simple determine moments needs new configuration contains higher moments desired ones problem fundamental problem statistical moments determination nonlinear overcome fundamental problem used closure technique approach convenient systems computational complexity basic idea moment closure technique define higher order moments product moments individuals instance hxm hxi ihxj dhxi moment dynamics depend second order moment dynamics next step add dynamic two moment system iyy cna ixx iyy iyy cna ixx iyy dynamics depend third order moments moment dynamics system closed sense close set moments depends higher order moments overcome issue closure technique used order calculate mean need add second order moment dynamics next approximation mean psmlm using handy use find moments system example state cos dynamic nonlinear cosine term use euler relation sin thus rewritten cos proceed define two new states fig approximation technique captures mean behavior projectile bold lines show mean trajectory projectile obtained numerical simulations approximation approximation results indistinguishable numerical results showing significant performance method 
standard deviation data standard deviation standard deviation data standard deviation impact points second order moments psmlm system states means exist first order statistical moments second order statistical moments number equations make running different methods finding optimal solutions real time impossible hence consider number second order moments approximate rest functions first order moments namely add equations form sole state instance moreover include moments show moment dynamics coordinates rest moments approximated fig method successfully predicts standard deviation impact points every cross represents impact point projectile noise component blue red ellipse standard deviation coordinates using approximation data simulations respectively applying closures closed set first order moment dynamics selected second order moments psmlm solving equations give approximate time trend mean standard deviation imulation esults rojectile rediction sing ean field paper fin stabilized projectile used following initial conditions location origin reference frame speed terms considered angles reference coordinates system rad rad rad angular velocities fig randomness starting point increase errors impact points hence vitiates accuracy missile mean field approximation capable giving accurate estimations error even presence random initial conditions physical parameters selected air density gravity constant rest physical parameters chosen reference diameter weight projectile slug aerodynamic coefficients cdd clp cna cypa cmq distances center mass aerodynamic moment force rmcp rmcm moments inertia ixx iyy addition wind speed taken weather cast agencies used using initial conditions parameters simulated psmlm result simulation seen figure clear figure approximation able predict mean behavior projectile time successfully next analyzed impact points figure shows method able predict standard deviation impact points around mean small error table initial values probability distribution parameter mean parameter mean moreover reality initial condition may change randomness nature wind topology terrain address uncertainty projectile prediction performed distribution initial conditions implemented source uncertainty numerical simulations choosing random initial canard esign rojectile ontrol vci wci drag force canard vci wci canard canards different locations two located parallel missile body force terms two xci yci zci wci uci uci uci two located parallel missile body zdirection force terms xci yci zci canard pitch yaw fig controller schematic following desired projectile changing angle canards control process starts getting information current states projectile states subtracted desired states find location error according error yaw pitch angle errors calculated using velocity states yaw pitch angle errors states used control system feedback canards angle updated using output controller angle attack side slip angle mach number aerodynamic coefficients depend mach number ratio speed sound speed air vehicles addition angle attack canard depends location canard projectile angle attack canard angle defined wci uci coefficients defined canard section focus controlling projectile apply control four canards used whose characteristics exactly canards allow adjust forces physical moments act projectile controlling forces moments generally obtained manipulating canard angles design feedback control subjective following desired trajectory trajectory minumum flight time using angles next define canard properties feedback 
control law details four symmetric canards added projectile lift force canard projectile dynamics controller condition drawn distribution introduced table figure shows method still capable predicting impact points expected standard deviation increases presence stochastic starting point vci uci uci uci moreover physical moments canards defined ryi lci mci rzi yci zci nci rxi rxi ryi rzi distance center gravity canard direction forces moments added system dynamic model directly design controller projectile using feedback control law desired trajectory required paper assume desired trajectory error terms desired trajectory actual trajectory difference along coordinates according error terms yaw pitch angle errors described tan tan angle attack tan side slip angle using errors control law feedback control gains finally control applied updating angle canard entire control design schematic shown figure vii imulation esults mplementing ontroller assumed aerodynamic coefficients canards constant also distance canards center gravity described following table iii distance center gravity canards parameter value parameter value parameter value usage identical canards canard wing areas equal velocities canards angular rates position canard however velocities calculated considering location canards angular velocities iteration control process selected control gain parameters using control parameters change standard deviation impact points shown figure control law successfully reduced variance impact points hence increases reliability accuracy missile however error simulation results approximation small negligible one way reduce error add moment dynamics analysis future work quantify bounds error estimation find optimal number dynamics needed added reach desired error bounds approximation error moments recently developed simple dynamic systems standard deviation standard deviation impact points uncontrolled standard deviation fig controlling projectile standard deviation impact points location reduces considerably control law successfully rejected contribution noise made projectile follow desired path results lower deviation impact point viii onclusion paper used sdes model projectiles noise effect next applied euler formula deduce nonlinearities trigonometric monomial form employ approximation obtain closed form equations describing mean standard deviation system approximation gives reliable results predicting time evolution projectile characteristics impact points finally proposed control scheme reduce errors impact points furthermore aim projectile hit exact target point also evades hit constrained areas purpose skewness kurtosis used avoid hitting areas changing shape distribution impact points research study higher order moments projectile skewness kurtosis finally work assumed controller built projectile sometimes need give new control law projectile transmission channel prospect research merge dynamics projectile random discrete transmission events modelled renewal transitions address requirement acknowledgment supported national science foundation grant eferences hussey physics books iii clarendon aristotle series vol naylor galileo theory projectile motion isis vol mavris delaurentis bandte hale stochastic approach aircraft analysis design aerospace sciences meeting exhibit reno charters linearized equations motion underlying dynamic stability aircraft spinning projectiles symmetrical missiles guidos cooper linearized motion projectile subjected lateral impulse journal spacecraft rockets vol 
zhao jia attitude stabilisation class stochastic spacecraft systems iet control theory applications vol rogers stochastic model predictive control guided projectiles impact area constraints journal dynamic systems measurement control vol finn value area defense prediction defense perfect attackers defenders final report institute defense analyses alexandria usa tech fieee immediata timmoneri meloni vigilante comparison recursive batch processing impact point prediction ballistic targets ieee international radar conference farina timmoneri vigilante classification launchimpact point prediction ballistic target via multiple model maximum likelihood estimator ieee conference radar yuan willett hardiman impact point prediction thrusting projectiles presence wind ieee transactions aerospace electronic systems vol yanushevsky modern missile guidance crc press manwell mcgowan rogers wind energy explained theory design application john wiley sons soltani singh conditional moment closure schemes studying stochastic dynamics genetic circuits ieee transactions biomedical circuits systems vol lee kim kim moment closure method stochastic reaction networks journal chemical physics vol gillespie approximations models iet systems biology vol kuehn moment closurea brief review control selforganizing nonlinear systems springer singh hespanha approximate moment dynamics chemically reacting systems ieee transactions automatic control vol tsarenko yaglom modeling turbulent diffusion atmospheric surface layer springer new york ghusinga soltani lamperski dhople singh approximate moment dynamics polynomial trigonometric stochastic systems arxiv preprint zhang yang zhan analysis guidance law performance radar guided missile journal projectiles rockets missiles guidance vol metz terminal guidance method guided missile operating according method patent online available https wang cao wang stochastic sliding mode variable structure guidance laws based optimal control theory journal control theory applications vol gross costello impact point model predictive control projectile instability protection proceedings institution mechanical engineers part journal aerospace engineering vol costello potential field artillery projectile improvement using movable military academy west point tech mccoy modern exterior ballistics launch flight dynamics symmetric projectiles schiffer amoruso euler angles quaternions six degree freedom simulations projectiles army armament research development engineering center picatinny arsenal armament engineering directorate tech fresconi celmins silton theory guidance flight control high maneuverability projectiles army research lab aberdeen proving weapons materials research directorate tech stengel exploring flight envelope princeton university press costello extended range gun launched smart projectile using controllable canards shock vibration vol cook flight dynamics principles linear systems approach aircraft stability control anderson fundamentals aerodynamics tata education etkin dynamics atmospheric flight courier corporation murphy free flight motion symmetric missiles army ballistic research lab aberdeen proving ground tech cooper costello flight dynamic response spinning projectiles lateral impulsive loads journal dynamic systems measurement control vol hainz costello modified projectile linear theory rapid trajectory prediction journal guidance control dynamics vol yuan willett hardiman amrdec impact point prediction short range thrusting projectiles proc spie vol vol hutchins san 
jose imm tracking theater ballistic missile boost phase spie proceedings signal data processing small targets vol tracking prediction boost vehicle position vol maley optimal estimation two process models measurements army research lab aberdeen proving ground weapons materials reseach directorate tech hespanha singh stochastic models chemically reacting systems using polynomial stochastic hybrid systems international journal robust nonlinear control vol bobbio gribaudo telek mean field methods performance analysis fifth international conference quantitative evaluation systems qest chibbaro minier stochastic modelling polydisperse turbulent flows stochastic methods fluid mechanics springer vrettas opper cornford variational meanfield algorithm efficient inference large systems stochastic differential equations physical review vol ollerenshaw costello model predictive control direct fire projectile equipped canards journal dynamic systems measurement control vol rogers costello cooper design considerations stability liquid payload projectiles journal spacecraft rockets vol ghusinga lamperski singh exact lower upper bounds stationary moments stochastic biochemical systems physical biology vol lamperski ghusinga singh stochastic optimal control using semidefinite programming moment dynamics decision control cdc ieee conference ieee soltani singh stochastic analysis linear systems renewal transitions american control conference acc control design analysis stochastic network control system arxiv analysis stochastic hybrid systems renewal transitions automatica vol
3
fast estimation median covariation matrix application online robust principal components jul analysis cardot antoine institut bourgogne bourgogne rue alain savary dijon france july abstract geometric median covariation matrix robust multivariate indicator dispersion extended without difficulty functional data define estimators based recursive algorithms simply updated new observation able deal rapidly large samples high dimensional data without obliged store data memory asymptotic convergence properties recursive algorithms studied weak conditions computation principal components also performed online approach useful online outlier detection simulation study clearly shows robust indicator competitive alternative minimum covariance determinant dimension data small robust principal components analysis based projection pursuit spherical projections high dimension data illustration large sample high dimensional dataset consisting individual audiences measured minute scale period hours confirms interest considering robust principal components analysis based median covariation matrix studied algorithms available package gmedian cran keywords averaging functional data geometric median online algorithms online principal components recursive robust estimation stochastic gradient weiszfeld algorithm introduction principal components analysis one useful statistical tool extract information reducing dimension one analyze large samples multivariate functional data see jolliffe ramsay silverman dimension sample size large outlying observations may difficult detect automatically principal components derived spectral analysis covariance matrix sensitive outliers see devlin many robust procedures principal components analysis considered literature see hubert huber ronchetti maronna popular approaches probably minimum covariance determinant estimator see rousseeuw van driessen robust projection pursuit see croux croux robust pca based projection pursuit extended deal functional data hyndman ullah bali adopting another point view robust modifications covariance matrix based projection data onto unit sphere proposed locantore see also gervini taskinen consider work another robust way measuring association variables extended directly functional data based notion median covariation matrix mcm defined minimizer expected loss criterion based norm see kraus panaretos first definition general setting seen geometric median see kemperman particular hilbert spaces square matrices operators functional data equipped frobenius norm mcm non negative unique weak conditions shown kraus panaretos also eigenspace usual covariance matrix distribution data symmetric second order moment finite spatial median particular hilbert space matrices mcm also robust indicator central location among covariance matrices breakdown point see kemperman maronna well bounded gross sensitivity error see cardot aim work twofold provides efficient recursive estimation algorithms mcm able deal large samples high dimensional data recursive property algorithms naturally deal data observed sequentially provide natural update estimators new observation another advantage compared classical approaches recursive algorithms require store data secondly work also aims highlighting interest considering median covariation matrix perform principal components analysis high dimensional contaminated data different algorithms considered get effective estimators mcm dimension data high sample size large weiszfeld algorithm see weiszfeld vardi zhang directly used 
estimate effectively geometric median median covariation matrix dimension sample size large static algorithm requires store data may inappropriate ineffective show algorithm developed cardot geometric median hilbert spaces adapted estimate recursively simultaneously median well median covariation matrix averaging step polyak juditsky two initial recursive estimators median mcm permits improve accuracy initial stochastic gradient algorithms simple modification stochastic gradient algorithm proposed order ensure median covariance estimator non negative also explain eigenelements estimator mcm updated online without obliged perform new spectral decomposition new observation paper organized follows median covariation matrix well recursive estimators defined section section almost sure quadratic mean consistency results given variables taking values general separable hilbert spaces proofs based new induction steps compared cardot allow get better convergence rates quadratic mean even new framework much complicated two averaged non linear algorithms running simultaneously one also note techniques generally employed deal two time scale robbins monro algorithms see mokkadem pelletier multivariate case require assumptions rest taylor expansion finite dimension data restrictive framework section comparison classic robust pca techniques made simulated data interest considering mcm also highlighted analysis individual audiences large sample high dimensional data dimension analyzed reasonable time classical robust pca approaches main parts proofs described section perspectives future research discussed section technical parts proofs well description weiszfeld algorithm context gathered appendix population point view recursive estimators let separable hilbert space example closed interval denote inner product associated norm consider random variable takes values define center follows arg min kxk solution often called geometric median uniquely defined broad assumptions distribution see kemperman expressed follows assumption exist two linearly independent unit vectors var distribution symmetric around zero admits first moment finite geometric median equal expectation note however general definition require assume first order moment kxk finite since kxk kuk geometric median covariation matrix mcm consider special vector space denoted matrices general separable hilbert spaces vector space linear operators mapping denoting orthonormal basis vector space equipped following inner product bif haej bej also separable hilbert space equivalently bif transpose matrix induced norm well known frobenius norm also called norm denoted finite second order moments expectation covariance matrix defined minimum argument elements belonging functional note general hilbert spaces inner product operator understood operator mcm obtained removing squares previous function order get robust indicator covariation define median covariation matrix denoted defined minimizer elements second term side prevents introduce hypotheses existence moments introducing random variable takes values mcm unique provided support concentrated line assumption rephrased follows assumption exist two linearly independent unit vectors var remark assumption assumption strongly connected indeed assumption holds var consider rank one matrices strictly positive variance distribution atom generally var unless scalar assuming also furthermore deduced easily mcm geometric median particular hilbert spaces operators robust indicator breakdown point see kemperman 
bounded sensitive gross error see cardot also assume assumption constant assumption implicitly forces distribution atoms likely satisfied dimension data large see chaudhuri cardot discussion note could weakened cardot allowing points necessarily different mcm strictly positive masses considering particular case assumption implies restrictive dimension equal larger assumption functional twice differentiable gradient hessian operator identity operator elements belonging furthermore also defined unique zero non linear equation remarking previous equality rewritten follows clear bounded symmetric non negative operator stated proposition kraus panaretos operator important stability property distribution symmetric finite second moment indeed covariance operator well defined case share eigenvectors eigenvector corresponding eigenvalue non negative value important result means gaussian generally symmetric distribution finite second order moments covariance operator median covariation operator eigenspaces note also conjectured kraus panaretos order eigenfunctions also efficient recursive algorithms suppose copies random variables law simplicity temporarily suppose median known consider sequence learning weights define recursive estimation procedure follows algorithm seen particular case averaged stochastic gradient algorithm studied cardot indeed first recursive algorithm stochastic gradient algorithm generated whereas final estimator obtained averaging past values first algorithm averaging step see polyak juditsky computation arithmetical mean past values slowly convergent estimator see proposition permits obtain new efficient estimator converging parametric rate asymptotic variance empirical risk minimizer see theorem cases value unknown also required estimate median build estimator possible estimate simultaneously considering two averaged stochastic gradient algorithms running simultaneously averaged recursive estimator median controlled sequence descent steps learning rates generally chosen follows tuning constants satisfy note construction even non negative may non negative matrix learning steps satisfy projecting onto closed convex cone non negative operators would require compute eigenvalues time consuming high dimension even rank one perturbation see cardot degras consider following simple approximation projection consists replacing descent step thresholded one pos min ensures remains non negative non negative use modified steps initialization recursive algorithm non negative matrix example ensure non negative online estimation principal components also possible approximate recursively eigenvectors unique sign associated largest eigenvalues without obliged perform spectral decomposition new observation many recursive strategies employed see cardot degras review various recursive estimation procedures eigenelements covariance matrix simplicity accuracy consider following one kuj combined orthogonalization deflation recursive algorithm based ideas developed weng related power method extracting eigenvectors assume first eigenvalues distinct estimated eigenvectors uniquely determined sign change tend eigenvectors computed possible compute principal components well indices outlyingness new observation see hubert review outliers detection multivariate approaches practical issues complexity memory recursive algorithms require elementary operations update additional online estimation given eigenvectors associated largest eigenvalues additional operations required orthogonalization procedure 
requires elementary operations note use classical algorithms estimating mcm see fritz envisaged high dimensional data since computation approximation hessian matrix would require elementary operations well known fast weiszfeld algorithm requires elementary operations sample size however estimation updated automatically data arrive sequentially another drawback compared recursive algorithms studied paper data must stored memory order elements whereas recursive technique require amount memory order performances recursive algorithms depend values tuning parameters value parameter often chosen previous empirical studies see cardot cardot shown thanks averaging step estimator performs well sensitive choice provided value small intuitive explanation could recursive process sense since deviations iteration unit norm finding universal values possible usual values interval fixed averaged recursive algorithm times faster weiszfeld approach see cardot asymptotic properties known seen averaged stochastic gradient estimator geometric median particular hilbert space asymptotic weak convergence estimator studied cardot shown theorem cardot theorem assumptions hold tends infinity stands convergence distribution limiting covariance operator explained cardot estimator efficient sense asymptotic distribution empirical risk minimizer related see derivation asymptotic normality multivariate case chakraborty chaudhuri general functional framework using delta method weak convergence hilbert spaces see dauxois cupidon one deduce theorem asymptotic normality estimated eigenvectors also proven see assumptions positive constant note finally non asymptotic bounds deviation around derived readily general results given cardot realistic case must also estimated complicated depends also estimated recursively data first state strong consistency estimators theorem assumptions hold lim kvn lim obtention rate convergence averaged recursive algorithm relies fine control asymptotic behavior algorithms stated following proposition theorem assumptions hold positive constant positive constant kvn obtention upper bound rate convergence order four algorithm crucial proofs furthermore following proposition ensures exhibited rate quadratic mean optimal one proposition assumptions positive constant kvn finally following theorem important theoretical result work shows spite fact considers observed data one one averaged recursive estimation procedure gives estimator classical parametric rate convergence norm theorem assumptions positive constant assuming eigenvalues multiplicity one deduced theorem lemma bosq convergence quadratic mean eigenvectors towards corresponding sign eigenvector illustration simulated real data small comparison classical robust pca techniques performed section considering data relatively high dimension samples moderate sizes permits compare approach classical robust pca techniques generally designed deal large samples high dimensional data comparison employed following well known robust techniques robust projection pursuit see croux croux minimum covariance determinant mcd see rousseeuw van driessen spherical pca see locantore computations made language development core team help packages pcapp rrcov reproductible research codes computing mcm posted cran gmedian package denote mcm recursive estimator defined mcm non negative modification whose learning weights defined size data large effective way estimating employ weiszfeld algorithm see weiszfeld vardi zhang well supplementary file description algorithms 
particular situation estimate obtained thanks weiszfeld algorithm denoted mcm following note optimization algorithms may preferred small dimension see fritz considered since would require computation hessian matrix whose size would lead much slower algorithms note finally alternative algorithms admit natural updating scheme data arrive sequentially completely ran new observation simulation protocol independent realizations random variable drawn mixture two distributions independent random variables random vector centered gaussian distribution covariance matrix min thought discretized version brownian sample path multivariate contamination comes different rates contamination controlled bernoulli variable independent three different scenarios see figure considered distribution elements vector independent realizations student distribution one degree freedom means first moment defined elements vector independent realizations student distribution reverse time student student figure sample trajectories three different contamination scenarios student degree freedom student degrees freedom reverse time brownian motion left right two degrees freedom means second moment defined vector distributed reverse time brownian motion gaussian centered distribution covariance matrix min covariance matrix averaged recursive algorithms considered tuning coefficients speed rate note values tuning parameters particularly optimised noted simulation results stable depend much value estimation error eigenspaces associated largest eigenvalues evaluated considering squared frobenius norm associated orthogonal projectors denoting orthogonal projector onto space generated eigenvectors estimation covariance matrix associated largest eigenvalues consider following loss criterion oracle mcd mcm mcm mcm sphpca figure estimation errors logarithmic scale monte carlo replications contamination mcm stands estimation performed weiszfeld algorithm whereas mcm denotes averaged recursive approach mcm non negative modification see equation means eigenspaces note always generated true estimated eigenvectors orthogonal comparison classical robust pca techniques first compare performances two estimators mcm based weiszfeld algorithm recursive algorithms see classical robust pca techniques generated samples size dimension replications different levels contamination considered dimensions first eigenvalue covariance matrix represents total variance second one median errors estimation eigenspace generated first two eigenvectors according criterion given table non contaminated data drawn different approaches distribution estimation error figure dimension large expected oracle classical pca situation provides best estimations eigenspaces mcd median covariation matrix estimated weiszfeld algorithm oracle mcd mcm mcm mcm sphpca pca figure estimation errors logarithmic scale monte carlo replications contamination distribution degrees freedom mcm stands estimation performed weiszfeld algorithm whereas mcm denotes averaged recursive approach mcm non negative modification learning steps pca mcd mcm mcm mcm sphpca dimension table median estimation errors according criterion non contaminated samples size monte carlo experiments modified mcm recursive estimator behave well similarly note dimension gets larger mcd used anymore mcm effective robust estimator eigenspaces data contaminated median errors estimation eigenspace generated first two eigenvectors according criterion given drawn table figure distribution estimation error different approaches 
make following remarks first note even level contamination small performances classical pca strongly affected presence outlying values large dimensions mcd algorithm mcm estimation provide best estimations original two dimensional eigenspace whereas gets larger mcd estimator used anymore construction mcm estimators obtained weiszfeld non negative recursive algorithm remain accurate also remark recursive mcm algorithms designed deal large samples performs well even moderate sample sizes see also figure modification descent step suggested corresponds estimator mcm permits improve accuracy initial mcm estimator specially noise level small performances spherical pca slightly less accurate whereas median error robust always largest among robust estimators contamination highly structured temporally level contamination small contamination reverse time brownian motion behavior mcm different robust estimators criterion appear less effective however one think presence two different populations completely different multivariate correlation structure mcd completely ignores part data necessarily better behavior online estimation principal components consider experiment high dimension evaluate ability recursive algorithms defined estimate recursively eigenvectors associated largest eigenvalues note due high dimension data limited computation time make comparison recursive robust techniques classical pca generate growing samples compute sample size approximation error different fast strategies true eigenspace generated method pca mcd sph pca mcm weiszfeld mcm mcm pca mcd sph pca mcm weiszfeld mcm mcm pca mcd sph pca mcm weiszfeld mcm mcm pca mcd sph pca mcm weiszfeld mcm mcm inv inv dimension table median estimation errors according criterion datasets sample size monte carlo experiments eigenvectors associated largest eigenvalues drawn figure evolution mean replications approximation error dimension function sample size samples contaminated degrees freedom student distribution rate important fact recursive algorithm approximates recursively eigenelements behaves well see nearly difference spectral decomposition denoted mcm figure estimates produced sequential algorithm sample sizes larger hundreds also note error made classical pca always high decrease sample size robust pca audience last example high dimension large sample case individual audiences measured french company every minutes panel people period hours see cardot detailed presentation data classical pca first eigenspace represents total variability whereas second one reproduces total variance third one fourth one thus variability data captured four dimensional space taking account large dimension data values indicate high temporal correlation large dimension data weiszfeld algorithm well robust pca techniques used anymore reasonable time personal computer mcm computed thanks recursive algorithm given approximately minutes laptop language without specific routine seen figure first two eigenvectors obtained classical pca robust pca based mcm rather different confirmed relatively large distance two corresponding eigenspaces first robust eigenvector puts stress time period comprised minutes minutes whereas first non robust eigenvector focuses smaller intensity larger period time comprised minutes second robust eigenvector differentiates people watching period minutes negative value second principal component people watching minutes positive value second principal component rather surprisingly third fourth eigenvectors non robust robust covariance 
matrices look quite similar see figure proofs give section proofs theorems proofs rely several technical lemmas whose proofs given supplementary file estimation error pca oracle mcm figure estimation errors eigenspaces criterion classical pca oracle pca recursive mcm estimator recursive estimation eigenelements static estimation based spectral decomposition eigenelements mcm audience pca mcm pca mcm minutes minutes figure audience data measured september minute scale comparison principal components classical pca black robust pca based median covariation matrix red first eigenvectors left second eigenvectors right audience pca mcm pca mcm minutes minutes figure audience data measured september minute scale comparison principal components classical pca black robust pca based mcm red third eigenvectors left fourth eigenvectors right proof theorem let recall algorithm defined recursively since thus sequence martingale differences adapted filtration indeed algorithm written follows moreover considered stochastic gradient algorithm decomposed follows finally linearizing gradient following lemma gives upper bounds remainder terms proof given supplementary file lemma assumptions bound three remainder terms first kvn way krn kmn finally kmn kvn deduce decomposition kvn hvn hvn krn hrn hrn note furthermore krn using fact sequence martingale differences adapted filtration kvn hvn hrn let kvn hvn krn moreover applying lemma theorem get positive constant krn kmn thus since hvn theorem see duflo instance ensures kvn converges almost surely finite random variable hvn furthermore induction inequality becomes krk since applying theorem lemma positive constant krk thus positive constant kvn since converges almost surely one conclude proof almost sure consistency arguments proof theorem cardot convexity properties given section supplementary file finally almost sure consistency obtained direct application topelitz lemma see lemma duflo proof theorem proof theorem relies properties moments given following three lemmas properties enable application markov inequality control probability deviations robbins monro algorithm lemma assumptions integer positive constant kvn lemma assumptions positive constants sup kvk kvn integer part real number lemma assumptions integer rank positive constants kvn kvn prove theorem let choose integer thus applying lemma positive constants rank kvn kvn let choose note one check rank help strong induction going prove announced results say positive constants defined lemma kvn kvn first let choose max max thus kvk kvk suppose previous inequalities verified applying lemma induction sup kvk sup since since factorizing definition way applying lemma induction kvn kvn since factorizing definition concludes induction proof proof theorem order prove theorem first recall following lemma lemma let random variables taking values normed vector space positive constant kyk real numbers integer kyk prove theorem let rewrite decomposition follows pelletier sum equalities apply abel transform divide get bound quadratic mean term side previous equality first applying theorem moreover since application lemma theorem gives ktk since way since ktn applying lemma theorem ktk since krn kmn since moreover let positive constant kmn krk kmn since kmn kvn schwarz inequality lemma give kmn kvn kvn kmn applying theorem theorem since finally one easily check since sequence martingale differences adapted filtration thus positive constant let smallest eigenvalue proposition supplementary file announced result proven 
concluding remarks simulation study illustration real data indicate performing robust principal components analysis via median covariation matrix bring new information compared classical pca interesting alternative classical robust principal components analysis techniques use recursive algorithms permits perform robust pca large datasets outlying observations may hard detect another interest use sequential algorithms estimation median covariation matrix well principal components performed online automatic update new observation without obliged store data memory simple modification averaged stochastic gradient algorithm proposed ensures non negativeness estimated covariation matrices modified algorithms better performances simulated data deeper study asymptotic behaviour recursive algorithms would certainly deserve investigations proving asymptotic normality obtaining limiting variance sequence estimators unknown would great interest challenging issue beyond scope paper would require study joint weak convergence two simultaneous recursive averaged estimators use mcm could interesting robustify estimation many different statistical models particularly functional data example could employed alternative robust functional projection pursuit robust functional time series prediction robust estimation functional linear regression introduction median matrix acknowledgements thank company allowing illustrate methodologies data also thank peggy careful reading proofs estimating median covariation matrix weiszfeld algorithm suppose fixed size sample want estimate geometric median iterative weiszfeld algorithm relies fact solution following optimization problem min kxi satisfies weights defined kxi kxj weiszfeld algorithm based following iterative scheme consider first pilot estimator step new approximation given iterative procedure stopped precision known advance final value algorithm denoted calculated estimator mcm computed similarly suppose defined step step new approximation procedure stopped precision fixed advance note construction algorithm leads estimated median covariation matrix always non negative convexity results section first give recall convexity properties functional following one gives information spectrum hessian proposition assumptions admits orthonormal basis composed eigenvectors let denote set eigenvalues moreover positive constant finally continuity positive constants proof similar one cardot consequently given furthermore cardot ensures local strong convexity shown following corollary corollary assumptions positive constant positive constant finally following lemma gives upper bound remainder term taylor expansion gradient lemma assumptions proof lemma let since proof lemma cardot assumptions one check concludes proof decompositions algorithm proof lemma let recall algorithm defined recursively let remark sequence martingale differences adapted filtration algorithm written follows finally let consider following linearization gradient proof lemma bound corollary lemma bounding krn let recall define random function defined note thus dominated convergence krn sup moreover one check bound term side previous equality first applying inequality using fact khk thus since thk khk khk khk khk way khk applying inequality thus since thk khk khk since positive constants khk khk khk finally khk khk applying inequalities announced result proven krn kmn bounding define random function note dominated convergence sup moreover bound krn one check application inequality finally khk thk khk khk khk 
announced result follows application inequality kmn kvn proofs lemma proof lemma using decomposition kvn hvn hvn krn hrn hrn note moreover krn since convex function get inequality kvn hrn let let recall krn kmn prove induction integer positive constant kvn case studied proof theorem let suppose positive constant kvn bounding kvn let apply inequality use fact sequence martingales differences adapted filtration hvn kvn krn kvn let denote second term side inequality applying inequality since hvn kvn krn kvn kvn kvn krn kvn help lemma kvn kkf kvn krn kvn applying inequality kvn kvn kvn induction kvn way applying inequality induction kvn kkf kvn kvn since similarly since krn since applying inequality induction krn kvn kpf kvn kpf finally applying inequalities positive constant hvn kvn krn kvn denote first term side inequality help lemma applying inequality hrn kvn kvn krn kkf kvn kkf kvn kvn moreover let krn kkf kvn kkf kvn kvn krn kkf kvn induction kvn moreover krn kkf kvn krn kvn krn kvn applying inequality induction since krn kvn krn kvn moreover applying theorem inequality since krn kmn krn kvn kmn kvn kvn finally kvn max kvn kvn thus positive constants kvn finally thanks inequalities positive constants kvn concludes induction proof proof lemma let define following linear operators using decomposition induction study asymptotic behavior linear operators cardot one check positive constants integers kop usual spectral norm linear operators bound quadratic mean term decomposition step quasi deterministic term applying inequality positive constant term converges exponentially fast step martingale term since sequence martingale differences adapted filtration moreover cardot lemma ensures positive constant step first remainder kmn remarking krn krk kmk applying lemma theorem kmk applying inequality splitting sum two parts applying lemma thus positive constant step second remainder term let recall kmn kvn thus kmk kvk applying lemma kmk kvk thanks lemma positive constant kvn thus applying inequality theorem step splitting sum two parts one check positive constant step third remainder term since kvn applying lemma kvk kvk thanks lemma positive constant kvn thus splitting sum two parts applying inequalities lemma positive constant sup sup kvk kvk thus positive constant sup kvk conclusion applying lemma decomposition kvn applying inequalities positive constants sup kvk kvn proof lemma let define use decomposition kwn krn hrn hrn since krn fact get application inequality kwn krn kvn thus since sequence martingale differences adapted filtration since kwn kvn inequality follows proposition fact convex application kwn krn kwn kvn kvn krn kvn krn kvn since krn applying inequality positive constants kwn krn kwn kvn kvn bound two first terms side inequality step bounding kwn since applying proposition one check kwn kvn hvn kvn hvn since convex application kwn kvn let positive integer introduce sequence events defined kvn kmn defined proposition sake simplicity consider defined proposition verifies applying proposition let kvn kvn kvn kvn way since gmn convex let kvn kvn kvn kvn applying proposition kvn kvn kvn kvn kvn kvn kvn kvn rank thus applying inequalities kvn kwn thus positive constant rank kwn kvn kvn must get upper bound kwn since kwn kvn since positive constant kvn kvn kwn acn kmn kvn applying markov inequality theorem lemma kwn taking kwn thus applying inequalities positive constants rank kwn kvn step bounding krn kwn kvn since kwn kvn applying lemma let krn kwn kvn krn kvn krn kvn kvn krn kvn 
since krn kmn mkf applying theorem godichonbaggioni kmn kvn kvn kvn step conclusion applying inequalities rank positive constants kvn kvn technical inequalities first following lemma recalls inequalities lemma let positive constants moreover let positive integers positive constants akj following lemma gives asymptotic behavior specific sequences descent steps lemma let constants two sequences defined thus positive constant integer part function proof lemma first prove inequality moreover thus prove inequality help integral test convergence thus help integral test convergence rank sake simplicity consider since thus conclusion references bali boente tyler wang robust functional principal components approach annals statistics bosq linear processes function spaces volume lecture notes statistics new york theory applications cardot chaouch stochastic approximation multivariate functional median lechevallier saporta editors compstat pages physica verlag springer cardot online estimation geometric median hilbert spaces non asymptotic confidence balls annals statistics appear cardot monnez fast recursive algorithm clustering large datasets computational statistics data analysis cardot zitt efficient fast estimation geometric median hilbert spaces averaged stochastic gradient algorithm bernoulli cardot degras online principal components analysis algorithm choose technical report chakraborty chaudhuri spatial distribution infinite dimensional spaces related quantiles depths annals statistics chaudhuri multivariate location estimation using extension type approach ann croux filzmoser oliveira algorithms robust principal component analysis chemometrics intelligent laboratory systems croux high breakdown estimators principal components approach revisited multivariate cupidon gilliam eubank ruymgaart delta method analytic functions random operators application functional data bernoulli dauxois pousse romain asymptotic theory principal components analysis random vector function applications statistical inference journal multivariate analysis devlin gnanadesikan kettenring robust estimation dispersion matrices principal components amer statist duflo random iterative models volume applications mathematics new york berlin translated french original stephen wilson revised author fritz filzmoser croux comparison algorithms multivariate comput gervini robust functional estimation using median spherical principal components biometrika estimating geometric median hilbert spaces stochastic gradient algorithms almost sure rates convergence multivariate analysis huber ronchetti robust statistics john wiley sons second edition hubert rousseeuw van aelst robust multivariate methods statistical science hyndman ullah robust forecasting mortality fertility rates functional data approach computational statistics data analysis jolliffe principal components analysis springer verlag new york second edition kemperman median finite measure banach space statistical data analysis based related methods pages amsterdam kraus panaretos dispersion operators resistant functional data analysis biometrika locantore marron simpson tripoli zhang cohen robust principal components functional data test maronna martin yohai robust statistics wiley series probability statistics john wiley sons chichester theory methods mokkadem pelletier convergence rate averaging nonlinear stochastic approximation algorithms ann appl nordhausen oja asymptotic theory spatial median nonparametrics robustness modern statistical inference time series analysis 
festschrift honor professor jana volume pages ims collection pelletier asymptotic almost sure efficiency averaged stochastic algorithms siam control polyak juditsky acceleration stochastic approximation siam control optimization development core team language environment statistical computing foundation statistical computing vienna austria isbn ramsay silverman functional data analysis springer new york second edition rousseeuw van driessen fast algorithm minimum covariance determinant estimator technometrics taskinen koch oja robustifying principal components analysis spatial sign vectors statist probability letters vardi zhang multivariate associated data depth proc natl acad sci usa weiszfeld point sum distances given points minimum tohoku math weng zhang hwang candid incremental principal component analysis ieee trans pattern anal mach
10
Learning ELM network weights using linear discriminant analysis

Philip de Chazal, Jonathan Tapson, André van Schaik
The MARCS Institute, University of Western Sydney, Penrith, NSW, Australia

Abstract. We present an alternative method for determining the hidden-to-output weight values of extreme learning machines performing classification tasks. The method is based on linear discriminant analysis and provides Bayes-optimal single-point estimates of the weight values.

Keywords: extreme learning machine, linear discriminant analysis, hidden-to-output weight optimization, MNIST database

Introduction

The extreme learning machine (ELM) is a feedforward neural network that offers fast training and flexible function approximation for classification tasks [1]. Its principal benefit is that the network parameters are calculated in a single pass of the training process. In its standard form, the input layer is fully connected to a hidden layer with nonlinear activation functions, and the hidden layer is fully connected to an output layer with linear activation functions. The number of hidden units is often much greater than the size of the input layer; many hidden units per input are frequently used. A key feature of ELMs is that the weights connecting the input layer to the hidden layer are set to random values, which simplifies the requirements of training to one of determining the hidden-to-output unit weights. This can be achieved in a single pass: by randomly projecting the inputs into a much higher dimensionality, it is usually possible to find a hyperplane that approximates the desired regression function or that represents a linearly separable classification problem [2]. A common way of calculating the hidden-to-output weights is to apply the Moore-Penrose pseudoinverse [3] to the hidden-layer outputs obtained using the labelled training data.

In this paper we present an alternative method of hidden-to-output weight calculation for networks performing classification tasks. The advantage of our method over the pseudoinverse method is that the weights are the best single-point estimates, from a Bayesian perspective, of the linear output stage. Using a network architecture with identical random values of the input-to-hidden-layer weights, we applied the two weight calculation methods to the MNIST database and demonstrate that our method offers a performance advantage.

Methods

Consider one sample of the input data series with index t, where the series has length K. Forward propagation of the local signals through the network is described by

a_m(t) = sum_n w_nm x_n(t),  h_m(t) = g(a_m(t)),  y_l(t) = sum_m w_ml h_m(t),

where y(t) is the output vector corresponding to the input vector x(t); n = 1, ..., N and m = 1, ..., M index the input layer and the hidden layer, with N the number of input features and M the number of hidden units; l = 1, ..., L indexes the output layer, with L the number of output units; w_nm and w_ml are the weights associated with the input-to-hidden and hidden-to-output layers; a_m and y_l are linear sums; and g is the hidden-layer activation function. In an ELM the w_nm are assigned randomly, which simplifies the training requirements: the task is the optimisation of the w_ml only, and the choice of linear output neurons reduces this optimisation problem to a single-pass algorithm.

The weight optimisation problem can be stated as follows: given the w_nm, find the w_ml that best map the hidden-layer outputs to the desired outputs. We restate this as a matrix equation. Using the elements h_m(t), form the M x K matrix A whose column t contains the outputs of the M hidden units for one instant of the series, and the L x K output matrix Y = W A, whose column t contains the output of the network for one instance of the series, with W the L x M matrix of hidden-to-output weights. It follows that the optimisation problem involves determining the matrix W given the series of desired outputs and the series of hidden-layer outputs. We represent the desired outputs using target vectors with the value 1 in the row corresponding to the desired class and 0 for the other elements; for example, t(t) = (0, 0, 0, 1)^T indicates that the desired target of sample t is class four of four classes. We restate the desired targets using the L x K matrix T whose column t contains the desired targets of the network for one instance of the series. Substituting the desired outputs for the network outputs, the optimization problem involves solving the linear equation W A = T.

Output weight calculation using the pseudoinverse

In the ELM literature, W is often determined by taking W = T A^+, where A^+ is the Moore-Penrose pseudoinverse of A. When the rows of A are linearly independent, which is normally true for M < K, A^+ may be calculated using A^+ = A^T (A A^T)^{-1}. This choice of W minimises the sum of the squared errors between the network outputs and the desired outputs, i.e. it minimises ||W A - T||^2. We refer to this method of output weight calculation as the pseudoinverse method. Note that in some cases of the classification problem it may be necessary to regularize the solution using standard methods such as Tikhonov regularization (ridge regression).
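As a concrete reference point, here is a minimal sketch of this standard ELM training pipeline. The code and names are ours; the tanh activation and the uniform weight range are assumptions, as the text does not fix them.

```python
import numpy as np

def train_elm_pinv(X, T, M, rng=np.random.default_rng(0)):
    """Standard ELM: random input weights, output weights by the
    Moore-Penrose pseudoinverse.  X is (K, N) inputs, T is (L, K) one-hot
    targets, M is the number of hidden units."""
    N = X.shape[1]
    W1 = rng.uniform(-1.0, 1.0, size=(M, N))   # random input->hidden weights, never trained
    A = np.tanh(W1 @ X.T)                      # hidden-layer outputs, shape (M, K)
    W2 = T @ np.linalg.pinv(A)                 # minimises ||W2 A - T||_F^2
    return W1, W2

def predict_elm(W1, W2, X):
    """Linear output units applied to the hidden-layer outputs."""
    return W2 @ np.tanh(W1 @ X.T)              # shape (L, K)
```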
Output weight calculation using linear discriminant analysis

In this paper we develop an alternative approach to estimating W based on the maximum likelihood estimator of a linear model of the hidden-layer outputs. We refer to this method as the LDA method, as it is equivalent to applying linear discriminant analysis to the hidden-layer outputs. Our presentation is based on the notation of Ripley [4].

Bayes' rule states that the posterior probability of the nth class, P(c_n | h), is related to the prior probability of the class, pi_n, and the class density function p(h | c_n) of the input data vector, in our case the hidden-layer output h, by

P(c_n | h) = p(h | c_n) pi_n / p(h).

The class densities are modelled with a Gaussian model with a common covariance matrix Sigma and class-dependent mean vectors mu_n. Given an input vector h, the class density is

p(h | c_n) = (2 pi)^{-M/2} |Sigma|^{-1/2} exp( -(1/2) (h - mu_n)^T Sigma^{-1} (h - mu_n) ),

where the dimension of the Gaussian model is set equal to the number of hidden units M, and h is defined by the hidden-unit outputs. To begin, the training data are partitioned according to the class membership of the labelled data vectors, so that the hidden-unit outputs whose members belong to class n are grouped together. Given the set of hidden-unit output data and their class memberships, a likelihood function is formed using the Gaussian model, and the aim is to find the values of mu_n, Sigma and pi_n that maximise it for the given set of training data, or equivalently that maximise its logarithm. Substituting the Gaussian model and maximising, the values are determined by the training data: mu_n is the mean of the hidden-output vectors of class n, Sigma is the covariance of the hidden outputs pooled across the classes, and pi_n is the prior probability of class n.

We now need to find the posterior values. We begin by substituting the Gaussian densities into Bayes' rule, bringing the prior inside the exponential function and removing the terms common to numerator and denominator, giving

P(c_n | h) = exp(q_n(h)) / sum_k exp(q_k(h)),  q_n(h) = -(1/2)(h - mu_n)^T Sigma^{-1} (h - mu_n) + log pi_n.

Expanding the quadratic terms and removing the term in h^T Sigma^{-1} h common to numerator and denominator, we get

P(c_n | h) = exp(y_n(h)) / sum_k exp(y_k(h)),  y_n(h) = mu_n^T Sigma^{-1} h - (1/2) mu_n^T Sigma^{-1} mu_n + log pi_n.

Classification is performed by choosing the class with the highest value of P(c_n | h); as the exponential is a monotonic function, we can use either q_n or y_n for deciding the final class, and we choose to use y_n as it is a linear function of the input data vector. It is used to determine the network weights as follows:

w_n = Sigma^{-1} mu_n,  b_n = log pi_n - (1/2) mu_n^T Sigma^{-1} mu_n.

Note that a constant term is introduced into h so that the biases b_n form the first row of the hidden-to-output-layer weight matrix. If we want to determine the posterior probabilities, we use the normalised exponential above applied to the network outputs.

Summary of the method. In summary, calculating W proceeds as follows: (i) partition the hidden-unit output data according to the class membership of the labelled data vectors, so that the members belonging to class n are grouped together; (ii) calculate the class means mu_n and the pooled covariance matrix Sigma; (iii) set the prior probabilities pi_n; (iv) calculate w_n = Sigma^{-1} mu_n and b_n = log pi_n - (1/2) mu_n^T Sigma^{-1} mu_n. To classify new data, calculate the network output response to the hidden-layer output and, optionally, the posterior probabilities P(c_n | h) = exp(y_n) / sum_k exp(y_k); the final decision is the network output with the highest value.

Combining classifiers. The posterior equation provides an easy way to combine the outputs of multiple classifiers. Posterior probabilities are calculated for each class and each classifier, a combined posterior probability is formed, and the class with the highest combined posterior probability is chosen. Among possible schemes [5], unweighted averaging across the posterior probability outputs is one of the simplest. A sketch of these steps follows.
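A minimal sketch of steps (i)-(iv) and of posterior averaging across an ensemble. The code is ours; the small ridge term added to Sigma is our numerical safeguard, not part of the method as described, and equal priors follow the experimental setup below.

```python
import numpy as np

def lda_output_weights(A, labels, n_classes, reg=1e-6):
    """Steps (i)-(iv): class means, pooled covariance, priors, then
    w_n = Sigma^{-1} mu_n and b_n = log(pi_n) - 0.5 mu_n^T Sigma^{-1} mu_n.
    A is (M, K): hidden-layer outputs of K labelled training samples."""
    M, K = A.shape
    mus = np.stack([A[:, labels == c].mean(axis=1) for c in range(n_classes)])
    Sigma = np.zeros((M, M))
    for c in range(n_classes):                    # pooled within-class covariance
        D = A[:, labels == c] - mus[c][:, None]
        Sigma += D @ D.T
    Sigma = Sigma / K + reg * np.eye(M)           # ridge term: our safeguard
    priors = np.full(n_classes, 1.0 / n_classes)  # equal priors, as in the text
    W = np.linalg.solve(Sigma, mus.T).T           # row n is (Sigma^{-1} mu_n)^T
    b = np.log(priors) - 0.5 * np.sum(W * mus, axis=1)
    return W, b                                   # outputs: Y = W @ A_new + b[:, None]

def softmax_posteriors(Y):
    """Posterior estimates from the linear discriminant outputs y_n."""
    E = np.exp(Y - Y.max(axis=0))                 # stabilised softmax over classes
    return E / E.sum(axis=0)

def combine_ensemble(list_of_Y):
    """Unweighted averaging of posterior probabilities across classifiers;
    the decision is the class with the highest combined posterior."""
    P = sum(softmax_posteriors(Y) for Y in list_of_Y) / len(list_of_Y)
    return P.argmax(axis=0)
```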
Experiments

We applied our weight calculation method to the MNIST handwritten digit recognition problem [6]. The authors of [2] previously reported good classification results using an ELM on this database. The database has 60,000 training and 10,000 testing examples; each example is a pixel-level grey-scale image of a handwritten digit from one of 10 classes, approximately equally distributed between the training and testing sets. The ELM algorithms were applied directly to the unprocessed images. We trained the networks by providing all data in batch mode, with random values of the input-layer weights drawn from a uniform distribution and the prior probabilities of the classes set equal.

In order to perform a direct comparison of the two methods we used the following protocol for each value of the fan-out (hidden units per input). Repeat several times: (i) assign random values to the input-layer weights and determine the hidden-layer outputs for the training data examples; (ii) determine the network weights using the pseudoinverse method; (iii) determine the network weights using the LDA method; (iv) evaluate both networks on the test data examples and store the results. We averaged the results of the repeats of the experiment and compared the misclassification rates. The results are shown in Fig. 1 and Table 1.

Figure 1: error rate on the MNIST database for varying fan-out (results averaged over the repeats of the experiment).

The results show that the LDA method outperformed the pseudoinverse method for every value of fan-out. The average performance benefit was a decrease in error rate, with a larger benefit at smaller fan-out values. Table 2 shows that there is little extra computational requirement for the LDA method.

Table 1: error rate and percentage improvement of the LDA method over the pseudoinverse method on the MNIST database (columns: fan-out, error rates, improvement; results averaged over the repeats).

Table 2: computation times in seconds (elapsed time) for training and testing the networks on the MNIST database images, using MATLAB code running on a Sony VAIO series laptop with an Intel processor.

The last experiment we performed investigated combining multiple networks by averaging their posterior probabilities. We investigated ensembles of increasing size, repeating the training and testing several times for each ensemble size and averaging the results, which are shown in Fig. 2.

Figure 2: error rate on the MNIST database for varying ensemble size (each result averaged over repeats of the experiment).

The results shown in Fig. 2 demonstrate the benefit of combining multiple LDA-ELM networks on the MNIST database. Combining two networks reduced the error rate, adding further networks reduced the error again, and the best error rate was achieved when all networks were combined.

Discussion

The results on the MNIST database shown in Fig. 1 suggest that a performance benefit can be gained by using the LDA output weight calculation method, with only a small extra computation overhead. We believe it is a viable alternative to the pseudoinverse method, especially for small values of fan-out. Another benefit is the ability to combine the outputs of several networks by combining the posterior probability estimates of the individual networks; applied to the MNIST database, we were able to reduce the error rate to a result comparable with the best performance of multilayer neural networks processing the raw data [7]. Future work will include comparing the two weight calculation methods on other publicly available databases such as the abalone and iris data sets [8].

Conclusion

We have presented a new method of weight calculation for the hidden-to-output weights of ELM networks performing classification tasks. The method is based on linear discriminant analysis and requires only a modest amount of extra calculation time compared to the pseudoinverse method. Applied to the MNIST database, it improved the average misclassification rate in comparison with the pseudoinverse method on identically configured and initialized networks.

Bibliography

[1] G.-B. Huang, Q.-Y. Zhu and C.-K. Siew, "Extreme learning machine: theory and applications", Neurocomputing, 2006.
[2] J. Tapson and A. van Schaik, "Learning the pseudoinverse solution to network weights", Neural Networks, 2013.
[3] R. Penrose, "On best approximate solutions of linear matrix equations", Mathematical Proceedings of the Cambridge Philosophical Society, 1956.
[4] B. D. Ripley, Pattern Recognition and Neural Networks, Cambridge University Press, 1996.
[5] L. I. Kuncheva, Combining Pattern Classifiers: Methods and Algorithms, Wiley, 2004.
[6] Y. LeCun and C. Cortes, "The MNIST database of handwritten digits", available at http://yann.lecun.com/exdb/mnist/.
[7] Y. LeCun, L. Bottou, Y. Bengio and P. Haffner, "Gradient-based learning applied to document recognition", Proceedings of the IEEE, 1998.
[8] Data sets from the UCI Machine Learning Repository, http://archive.ics.uci.edu/ml.
9
recurrent orthogonal networks tasks mikael henaff new york university facebook research mar arthur szlam facebook research mbh nyu edu aszlam facebook com yann lecun new york university facebook research abstract although rnns shown powerful tools processing sequential data finding architectures optimization strategies allow model long term dependencies still active area research work carefully analyze two synthetic datasets originally outlined hochreiter schmidhuber used evaluate ability rnns store information many time steps explicitly construct rnn solutions problems using constructions illuminate problems way rnns store different types information hidden states constructions furthermore explain success recent methods specify unitary initializations constraints transition matrices introduction recurrent neural networks rnns powerful models naturally suited processing sequential data maintain hidden state encodes information previous elements sequence classical version rnn elman every timestep hidden state updated function input current hidden state theory recursive procedure allows models store complex signals arbitrarily long timescales however practice rnns considered difficult train due vanishing exploding gradient problems bengio problems arise proceedings international conference machine learning new york usa jmlr volume copyright author yann nyu edu spectral norm transition matrix significantly different due transition functions spectral norm transition matrix greater gradients grow exponentially magnitude backpropagation known exploding gradient problem spectral norm less gradients vanish exponentially quickly known vanishing gradient problem recently simple strategy clipping gradients introduced proved effective addressing exploding gradient problem mikolov problem vanishing gradients shown difficult various strategies proposed years address one successful approach known long memory lstm units hochreiter schmidhuber modify architecture hidden units introducing gates explicitly control flow information function state input specifically signal stored hidden unit must explicitly erased forget gate otherwise stored indefinitely allows information carried long periods time lstms become successful applications language modeling machine translation speech recognition zaremba sutskever graves methods proposed deal learning dependencies adding separate contextual memory mikolov stabilizing activations krueger mimesevic using sophisticated optimization schemes martens sutskever two recent methods propose directly address vanishing gradient problem either initializing parameterizing transition matrix orthogonal unitary matrices arjovsky works used set synthetic problems originally outlined hochreiter schmidhuber variants thereof testing ability methods learn dependencies synthetic problems recurrent orthogonal networks tasks designed pathologically difficult require models store information long timescales hundreds timesteps different approaches solved problems varying degrees success martens sutskever authors report hessianfree optimization based method solves addition task timesteps authors krueger mimesevic reported method beat chance baseline adding task cases irnn reported solve addition task method proposed arjovsky able solve copy task timesteps able completely solve addition task timesteps partially solves work analyze tasks construct explicit rnn solutions solutions illuminate tasks provide theoretical justification success recent approaches using orthogonal initializations 
unitary constraints transition matrix rnn particular show classical elman rnn transition random orthogonal initialization high probability close explicit rnn solution sequence memorization task network architecture identity initialization close explicit solution addition task verify experimentally initializing correctly random orthogonal identity critical success tasks finally show pooling used allow model choose memory several works studied properties orthogonal matrices relation neural networks work saxe gives exact solutions learning dynamics deep linear networks based analysis suggests orthogonal initialization scheme accelerate learning authors white ganguli study ability linear rnns orthogonal generic transition matrices respectively store scalar sequences hidden state show memory capacity scales number hidden units work complements providing related analysis discrete input sequences architectures review recurrent neural network rnn architectures processing sequential data discuss modifications use long memory problems fix following notation input sequences denoted output sequences denoted srnn srnn elman consists transition matrix decoder matrix output dimension encoder matrix input dimension bias either output input categorical respectively number classes use representation srnn ingests sequence keeps running updates hidden state using hidden state decoder matrix produces outputs input output hidden state respectively time great improvements training srnns since introduction shown powerful models tasks language modeling mikolov still difficult train generic srnns use information inputs hundreds timesteps previous computing current output bengio pascanu following use simplification srnns makes sense less powerful models makes easier train solve simple tasks namely placing input hidden state rather hidden state output obtain rnns linear transitions case categorical inputs using call update equations finally note appropriately scaling weights biases srnn made approximate way around course optimization may never find scaling lstm lstm hochreiter schmidhuber architecture designed improve upon srnn introduction simple memory cells gating architecture work use architecture originally proposed hochreiter schmidhuber memory cell network computes output four gates update gate input gate forget gate output gate outputs gates recurrent orthogonal networks tasks copying problem tanh cell state updated function input previous state finally hidden state computed function cell state output gate tanh relatively common variation original lstm involves adding peephole connections gers allows information flow cell state various gates variant originally designed measure generate precise time intervals proven successful speech recognition sequence generation graves graves pooling consider initialized either random orthogonal transition matrices identity transitions see large difference behavior initializations however set architecture random orthogonal initialization behaves much closer identity initialization using pooling layer output feed pooled unpooled hidden layer decoder model choose whether wants randomorthogonal like representation fix pool size update equations model dimensional vector hkd dimensional vector defined task tests network ability recall information seen many time steps previously follow setup arjovsky briefly outline let set symbols pick numbers input consists length vector categories starting entries sampled uniformly sequence remembered next inputs set blank category following 
single input represents delimiter indicating network output initial entries input last inputs set required output sequence consists entries followed first entries input sequence exactly order task minimize average predictions time step amounts remembering categorical sequence length time steps solution mechanism write solution problem write descriptions equation note since inputs categorical assume used fix number pick random integer drawn uniformly let cos sin sin cos define block diagonal matrix note iterating spins different rates synchronize multiples thus acts clock period set matrix columns sampled uniformly unit sphere form appending two zero columns one extra row entry entry entry schematically tasks section describe tasks hochreiter schmidhuber arjovsky involve dependencies long timescales designed pathologically hard srnn finally set except scale column zero column also zero entries recurrent orthogonal networks tasks gives show rnn operates starting overview last dimension hidden state divides state space two regions one model outputs blank symbol outputs one first symbols dictionary model begins first region remains encounters delimiter symbol sends second symbols input sequence encoded hidden state rotation applied timestep key result rotation powers hides symbol encoded hidden state decorrelates current representation original one due periodicity timesteps different symbols input sequence surface hidden state one time order seen symbols sequence whose representations rotations applied remain hidden causes output units symbols fire correct order give precise description need little notation denote first coordinates last coordinate denote jth column rnn works follows initialized hidden state first inputs figure success percentage mechanism copy problem computed trials sum continues cycle giving output following briefly argue uij small large enough repeatedly use variance sum independent random variables grows sum variances denote uji pair coordinates jth column corresponding ith block since uniform sphere expect since fixed choices definition qpi independent utji qpi uji mean zero since utj uij utji qpi uji expect next inputs changes incrementing step note far best match large negative last component time token seen set positive ensures blank symbol output moreover since uniform sphere similarly expect time uij uij argue large enough high probability uij small multiplication max value thus fix small number say choose psd large enough high probability uij even though wit finally weak dependence fixed exponentially unlikely nearest neighbor close enough interfere recurrent orthogonal networks tasks solution mechanism suggests random orthogonal matrix chosen example via decomposition gaussian matrix good starting point solving task construction invariant rotations always find basis given orthogonal matrix block form basis thus necessary descent nudge eigenvalues orthogonal matrix roots unity already basic form construction also gives good explanation performance models used copy problem arjovsky added hidden state hand exactly added mechanism known least implicitly although know written explicitly least since hochreiter schmidhuber seen simple lstm model following gates finally note although used setup arjovsky construction modified solve problems hochreiter schmidhuber equation solution mechanism experiments comparison tasks since construction copy mechanism randomized provide experiment show solution degrades function dictionary size length sequence remembered strong dependence 
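The clock construction just described is easy to verify numerically. Below is a minimal sketch (our code, not the authors') that builds the block-diagonal matrix of 2x2 rotations with random integer rates k_i and angles 2*pi*k_i/period, and checks that the blocks synchronise after exactly `period` steps, i.e. W^period = I.

```python
import numpy as np

def rotation_clock(n_blocks, period, rng=np.random.default_rng(0)):
    """Block-diagonal orthogonal matrix whose 2x2 blocks spin at angles
    2*pi*k_i/period for random integers k_i; the blocks rotate at different
    rates but synchronise at multiples of `period`, acting as a clock."""
    W = np.zeros((2 * n_blocks, 2 * n_blocks))
    for i in range(n_blocks):
        k = rng.integers(1, period)               # random integer rate k_i
        th = 2 * np.pi * k / period
        W[2*i:2*i+2, 2*i:2*i+2] = [[np.cos(th), -np.sin(th)],
                                   [np.sin(th),  np.cos(th)]]
    return W

W = rotation_clock(n_blocks=8, period=100)
assert np.allclose(np.linalg.matrix_power(W, 100), np.eye(16))  # W^100 = I
```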
length time remember sequence figure shows number successes runs note matrix mechanism adding problem identity build redundant solution using larger identity matrix describe identity using block structure matrix defined copy task namely hand copy task acts clock synchronizes fixed number steps important mechanism described clock looks random time example instead used block mechanism would succeed transition matrices addition task copy task thus opposites sense addition mass unit circle copy uniformly distributed unit circle possible variable ength opy roblem note solution mechanism copy problem depends fixed location regurgitating input experiments also discuss variant copy task symbol indicate memorized sequence must output randomly located considered variant task hochreiter schmidhuber know bounded explicit srnn solution variable length problem although solution using multiplicative rnn instead srnn keeping extra hidden variables track power solves adding problem adding problem requires network remember two marked numbers long sequence add specifically input consists two dimensional sequence first coordinate uniformly sampled second coordinate save two two entries required output xji solution mechanism problem simple explicit solution using ltrnn relu one dimensional hidden state namely set time step nothing experiments show hard learn adding task transition matrix initialized random orthogonal matrix easy initialized identity copy task one way get unified solution use pooling initialized matrix distributed uniformly decoder choose use pooled hiddens away phase appear adding task use raw hiddens experiments impact initialization based analysis hypothesize ltrnn random orthogonal initialization denoted ltornn perform well sequence memorization problem identity initialization denoted perform well addition task test conducted following experiment copy addition task different timescales recurrent orthogonal networks tasks opy opy lstm lstm cros entropy cros entropy ite tions hundre ite tions hundre figure results copy task lstm mse mse mse lstm lstm ite tions hundre ite tions hundre ite tions hundre figure results addition task task timescale trained different random seeds transformation matrices models intialized using gaussian distribution mean variance number incoming connections hidden unit projected transition matrix nearest orthogonal matrix setting singular values experiments used rmsprop train networks fixed learning rate decay rate preliminary experiments tried different learning rates chose largest one loss diverge used also included lstms experiments baseline used method pick learning rate ended experiments normalized gradients respect hidden activations denotes number timesteps preliminary experiments also found models activations frequently exploded whenever largest singular value transition matrix became much greater therefore adopted simple activation clipping strategy rescaled activations magnitude whenever magnitude exceeded experiments chose figure shows results copy task lstm networks trained hidden units see lstm difficulty beating baseline performance outputting empty symbol however eventually converge solution shown figure however solves task almost immediately note behavior similar urnn arjovsky paramaterized way makes easy recover explicit solution described ltirnn never able find solution figure shows results addition task timesteps networks trained hidden units trained single ltornn due time constraints contrast copy task able efficiently solve recurrent orthogonal 
networks tasks lstm mse mse ite tions hundre lstm ros ntropy ite tions hundre opy lstm opy ros ntropy lstm ite tions hundre ite tions hundre figure results copy addition task pooling architectures note lstm eventually solve copy task problem whereas able solve long time lstm also able easily solve task consistent original work hochreiter schmidhuber authors report solving task timesteps note lstm baseline differs arjovsky reported difficulty solving addition task hypothesize difference due use different variants lstm architecture peephole connections pooling experiments next ran series experiments examine effect feeding pooled outputs decoder see could obtain good performance copy addition tasks single architecture initialization experiments added soft penalty transition matrix keep orthogonal throughout training cally every iteration applied one step stochastic gradient descent minimize loss evaluated random points unit sphere note requires operations regular update requires operations adding soft constraint negligible computational overhead experiments set minibatch size pooling experiments used pool size stride results shown figure pooling easily able solve copy task timescales approximately solves addition task timescales well even though convergence slower success copy task surprising since zeroing matrix equation solve problem solution regular good performance adding task somewhat interesting gain insight network stores information stable manner recurrent orthogonal networks tasks ria ble opy lstm ros ntropy ite tions hundre figure results variable length copy task approximately orthogonal transition matrix plotted activations hidden states time processes input sequence displayed figure observe relatively constant activations first marked number encountered triggers oscillatory patterns along certain dimensions second marked number seen existing oscillations amplified new ones emerge suggests network stores information stably radius hidden state rotations along different subspaces information recovered phase discarded though pooling operation thus model uniform clocklike oscillations perceived pooling variable length copy task seen stark impact initialization performance copy addition task mitigation addition pooling layer tested models problem roughly fixed size solution mechanism namely variable length copy task figure shows performance pooling lstm hidden units variable length copy task timesteps even though number timesteps significantly less tasks none able beat chance baseline whereas lstm able solve task even though convergence slow experiment classic example detail construction synthetic benchmark favor model way fails generalize tasks figure activation patterns pooling network two marked numbers added occur positions conclusion work analyzed two standard synthetic memory problems provided explicit rnn solutions found fixed length copy problem solved using rnn transition matrix root identity matrix whose eigenvalues well distributed unit circle remarked random orthogonal matrices almost satisfy description also saw addition problem solved transition matrix showed correspondingly initializing allows rnn easily optimized solving addition task initializing random orthogonal matrix allows easy optimization copy task flipping leads poor results suggests optimization difficulty transitioning oscillatory steady dynamics mitigated adding pooling layer allows model easily choose two regimes finally experiment variable length copy task illustrates although synthetic benchmarks useful 
evaluating specific capabilities given model success necessarily generalize across different tasks novel model architectures evaluated broad set benchmarks well natural data references arjovsky shah bengio unitary evolution recurrent neural networks september url http bengio simard frasconi learning dependencies gradient descent difficult ieee transactions neural networks recurrent orthogonal networks tasks elman jeffrey finding structure time cognitive science ganguli surya huh dongsung sompolinsky haim memory traces dynamical systems doi gers felix schraudolph nicol schmidhuber learning precise timing lstm recurrent networks mach learn march issn doi url http graves alex generating sequences recurrent neural networks september url http graves alex mohamed hinton geoffrey speech recognition deep recurrent neural networks ieee international conference acoustics speech signal processing icassp vancouver canada may doi url http hochreiter schmidhuber long memory neural computation krueger mimesevic regularizing rnns stabilizing activations september url http langley crafting papers machine learning langley pat proceedings international conference machine learning icml stanford morgan kaufmann quoc jaitly hinton simple way initialize recurrent networks rectified linear units september url http martens sutskever learning recurrent neural networks optimization proceedings international conference machine learning icml bellevue washington usa june july mikolov joulin chopra learning longer memory recurrent neural networks september url http mikolov statistical language models based neural networks phd thesis url http pascanu razvan mikolov tomas bengio yoshua difficulty training recurrent neural networks proceedings international conference machine learning icml atlanta usa june saxe andrew mcclelland james ganguli surya exact solutions nonlinear dynamics learning deep linear neural networks url http cite sutskever ilya vinyals oriol quoc sequence sequence learning neural networks advances neural information processing systems annual conference neural information processing systems december montreal quebec canada white lee sompolinky memory orthogonal neural networks physical review letters issn zaremba sutskever vinyals recurrent neural network regularization september url http
9
MINT: mutual information based transductive feature selection for genetic trait prediction

Dan He, Irina Rish, David Haws, Laxmi Parida (IBM T. J. Watson Research, Yorktown Heights, USA) and Simon Teyssedre, Zivan Karaman (Limagrain Europe, Chappes Research Center, Chappes, France)

Introduction

Whole-genome prediction of complex phenotypic traits using genotyping arrays has recently attracted a lot of attention in relevant fields such as plant and animal breeding and genetic epidemiology. Given a set of biallelic molecular markers (SNPs) with genotype values encoded on a collection of plant, animal or human samples, the goal is to predict the values of certain traits, usually highly polygenic and quantitative, by modeling all marker effects simultaneously, unlike traditional GWAS. rrBLUP, used widely for trait prediction, builds a linear model by fitting all genotypes, and the coefficient computed for each marker can be considered a measure of the importance of the marker. Its underlying hypothesis of normally distributed marker effects is well suited to highly polygenic traits, its computations are fast and robust, and it is one of the most used models for whole-genome prediction. Other popular predictive models include the Lasso, ridge regression, Bayes A, Bayes B, Bayes Cpi, the Bayesian lasso, etc. As the number of genotypes is generally much bigger than the number of samples, predictive models suffer from the curse of dimensionality. This problem affects not only the computational efficiency of the learning algorithms but can also lead to poor performance, mainly due to correlation among markers. Feature selection is considered a successful solution to this problem: a subset of important features is selected, and the predictive models are trained on these features. A popular criterion for feature selection is called mRMR, where selected features are maximally relevant to the class value and simultaneously minimally dependent on each other. The mRMR method, as proposed, greedily selects features to maximize relevance and minimize redundancy, and has been applied successfully in various applications.

Transductive learning, first introduced by Vapnik, assumes that the test data, i.e. the predictor variables (markers), are available to the learning algorithms; the target variable values of the test samples are of course unknown. Models built on both training and test data usually lead to better predictive performance on the test data. In this work we propose a transductive feature selection method, MINT, based on information theory. MINT applies the mRMR criterion and integrates the test data in a natural way into the feature selection process. A dynamic programming algorithm is developed to speed up the selection process. Experiments on simulated and real data show that MINT generally achieves similar or better results than mRMR, which relies on training data only. To the best of our knowledge, MINT is the first transductive feature selection method based on the mRMR criterion.

Methods

A popular criterion for feature selection, maximum relevance, searches for features satisfying (1), which measures the mean value of the mutual information between the individual features x_i and the class variable c:

max D(S),  D = (1/|S|) sum_{x_i in S} I(x_i; c),  (1)

where S is the set of selected features and I denotes mutual information. Feature selection based on (1) alone, however, tends to select features with high redundancy, namely the correlations among the selected features tend to be big. If we remove features that are highly correlated with other selected features, the respective discriminative power would not change much. Therefore the minimum redundancy criterion was proposed to select mutually exclusive features:

min R(S),  R = (1/|S|^2) sum_{x_i, x_j in S} I(x_i; x_j).  (2)

An operator is defined to combine the two equations, so that both are optimized at the same time:

max (D - R).  (3)

In this work, based on the mRMR criterion, we propose a novel method MINT (Mutual INformation based Transductive feature selection), which targets feature selection on the training data together with the unlabeled test data. We observe that the mRMR criterion has two components, one for maximum relevance and one for minimum redundancy, and the two components are independent of each other. Maximum relevance requires the calculation of mutual information between the selected features and the target variable; in transductive learning the target variable values of the test samples are not available, so this component remains untouched. Minimum redundancy, on the other hand, calculates mutual information among the selected features only, and the target variable values are not involved; therefore we can make the method transductive by including the test samples in this component, which helps improve the estimation of the mutual information.

We apply the incremental search strategy used in mRMR to effectively find the features defined in (3). The incremental algorithm works as follows: assume the feature set S_{m-1}, already generated, contains m-1 features; the m-th feature needs to be selected from the set X \ S_{m-1} and maximizes the following objective function:

max_{x_j in X \ S_{m-1}} [ I(x_j^{training}; c^{training}) - (1/(m-1)) sum_{x_i in S_{m-1}} I(x_j^{all}; x_i^{all}) ],  (4)

where x^{training} denotes the feature vector including the training data only, x^{all} denotes the feature vector including both training and test data, c^{training} denotes the class value vector of the training data, and I is the mutual information. A sketch of the resulting selection loop is given below, ahead of the speed-up we describe next.
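The sketch below is our code, not the authors'. Mutual informations are estimated here by simple equal-width binning via scikit-learn's mutual_info_score, which is our simplification for continuous markers and traits; the red_sum cache implements the dynamic programming device described in the next paragraph, so each step adds exactly one new mutual-information term per candidate feature.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def mint_select(X_train, y_train, X_test, k, n_bins=10):
    """Greedy MINT selection (a sketch).  Relevance I(x_j; c) uses the
    labelled training rows only; redundancy I(x_j; x_i) uses training + test
    rows, which is the transductive part."""
    disc = lambda v: np.digitize(v, np.histogram_bin_edges(v, bins=n_bins)[1:-1])
    Xt = np.apply_along_axis(disc, 0, X_train)               # training rows, binned
    Xa = np.apply_along_axis(disc, 0, np.vstack([X_train, X_test]))
    c = disc(y_train)                                        # binned target
    M = X_train.shape[1]
    relevance = np.array([mutual_info_score(c, Xt[:, j]) for j in range(M)])
    selected = [int(np.argmax(relevance))]
    red_sum = np.zeros(M)                                    # cached redundancy sums
    while len(selected) < k:
        s = selected[-1]                                     # feature added last step
        red_sum += np.array([mutual_info_score(Xa[:, j], Xa[:, s])
                             for j in range(M)])
        score = relevance - red_sum / len(selected)          # objective (4)
        score[selected] = -np.inf                            # never reselect
        selected.append(int(np.argmax(score)))
    return selected
```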
Next we propose an efficient greedy algorithm to incrementally select features based on a dynamic programming strategy. The motivation of the operation is simple; for clarity we ignore the superscripts training/all here. Since features are added in an incremental manner, the redundancy sums at consecutive steps differ by only one feature, so we need not recompute the whole sum of mutual information at every step: we save the sum for every candidate at each step and reuse it at the next step, adding only the term for the newly selected feature. The complexity of the dynamic programming algorithm is O(kM), where k is the number of selected features and M the number of total features.

Experimental results

We compare the predictive performance of rrBLUP on the full set of variables versus its performance on subsets of variables of different size selected by mRMR and MINT, referred to as mRMR rrBLUP and MINT rrBLUP respectively. Similar results, omitted here due to space restrictions, were obtained by applying other predictive methods (Lasso, Elastic Net, SVR; see Table 3) to the features selected by mRMR and MINT. In all experiments we measure the average coefficient of determination R^2, computed as the square of the Pearson correlation coefficient between the true and predicted outputs; a higher R^2 indicates better performance.

Simulated data. As our method is based on the mRMR criterion, we experiment with different levels of relevance and redundancy to show how the performance of MINT relies on these components. We randomly simulate different data sets for each parameter setting and report average results. We first simulate the target variable vector following a multivariate uniform distribution and simulate the features by adding noise vectors following a multivariate normal distribution; thus the noisier features are bad features and the less noisy ones are good. We simulate both good and bad features. The results are shown in Table 1 (case one): in this case both feature selection methods work well, and as the good features are randomly simulated with low redundancy, the performances are almost identical. Next we simulate the target variable vector and a design matrix built from seed features, where each seed feature is used to simulate a set of duplicate features; we consider all of these good features and also simulate bad features. The good and bad features can therefore still be relatively easily distinguished, but there are now large redundancies among the good features. The results are shown in Table 1 (case two): MINT consistently outperforms mRMR due to the redundancy introduced into the good feature set, and both methods outperform rrBLUP.

Table 1: performance (average R^2) of rrBLUP on the full set of features and of MINT rrBLUP and mRMR rrBLUP for varying numbers of selected features, on the simulated data for the two cases.

Real data. Next we compare the methods on two maize data sets, Dent and Flint, and show the results in Table 2. Each data set has three phenotypes, thus six phenotypes overall; the two panels differ in their numbers of samples and features. We vary the number of selected features. It is obvious that both MINT rrBLUP and mRMR rrBLUP outperform rrBLUP significantly, indicating that feature selection in general is able to improve the performance of the predictive model. On the other hand, on almost all data sets MINT outperforms mRMR consistently, illustrating the effectiveness of transduction.

Table 2: performance (average R^2) of rrBLUP on all features and on selected features for the maize Dent and Flint data sets, for varying numbers of selected features and phenotypes of different heritability.

Table 3: performance of feature selection combined with rrBLUP, Lasso, Elastic Net and SVR on the data sets, with the numbers of samples.

Conclusions

In this work we proposed a transductive feature selection method, MINT, based on information theory, where the test data are integrated in a natural way into the greedy feature selection process. A dynamic programming algorithm was developed to speed up the greedy selection. Experiments on simulated and real data show that MINT is generally a better method than the inductive feature selection method mRMR. MINT is not restricted to genetic trait prediction problems and is a generic feature selection model.

References

Cai, Feng, Feng and Kong. Prediction of protein subcellular locations with feature selection and analysis. Protein and Peptide Letters.
S. S. Chen, D. L. Donoho and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing.
A. Gammerman, V. Vovk and V. Vapnik. Learning by transduction. In Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence, Morgan Kaufmann.
I. Guyon and A. Elisseeff. An introduction to variable and feature selection. Journal of Machine Learning Research.
Huang, Shi, Wang, Feng, Kong, Cai and Chou. Analysis and prediction of the metabolic stability of proteins based on their sequential features, subcellular locations and interaction networks. PLoS ONE.
A. Jain and D. Zongker. Feature selection: evaluation, application, and small sample performance. IEEE Transactions on Pattern Analysis and Machine Intelligence.
K. Kizilkaya, R. L. Fernando and D. J. Garrick. Genomic prediction of simulated multibreed and purebred performance using observed fifty thousand single nucleotide polymorphism genotypes. Journal of Animal Science.
A. Legarra, C. Robert-Granié, P. Croiseau, F. Guillaume and S. Fritz. Improved LASSO for genomic selection. Genetics Research.
T. H. E. Meuwissen, B. J. Hayes and M. E. Goddard. Prediction of total genetic value using genome-wide dense marker maps. Genetics.
T. Park and G. Casella. The Bayesian lasso. Journal of the American Statistical Association.
H. Peng, F. Long and C. Ding. Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy. IEEE Transactions on Pattern Analysis and Machine Intelligence.
R. Rincent et al. Maximizing the reliability of genomic selection by optimizing the calibration set of reference individuals: comparison of methods in two diverse groups of maize inbreds (Zea mays L.). Genetics.
R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B.
J. C. Whittaker, R. Thompson and M. C. Denham. Marker-assisted selection using ridge regression. Genetical Research.
Y. Yang and J. O. Pedersen. A comparative study on feature selection in text categorization. In Proceedings of the International Conference on Machine Learning, Morgan Kaufmann.
Y. Zhang and C. Ding. Gene selection algorithm by combining ReliefF and mRMR. BMC Genomics, Suppl.
bayesian network learning via topological order young woong park diego klabjan college business iowa state university ames usa department industrial engineering management sciences northwestern university evanston usa aug august abstract propose mixed integer programming mip model iterative algorithms based topological orders solve optimization problems acyclic constraints directed graph proposed mip model significantly lower number constraints compared popular mip models based cycle elimination constraints triangular inequalities proposed iterative algorithms use gradient descent iterative reordering approaches respectively searching topological orders computational experiment presented gaussian bayesian network learning problem optimization problem minimizing sum squared errors regression models penalty feature network application gene network inference bioinformatics introduction directed graph directed acyclic graph dag acyclic digraph contain directed cycle paper consider generic optimization problem directed graph acyclic constraints require selected subgraph dag let consider complete digraph let number nodes digraph decision variable matrix associated arcs yjk related arc supp adjacency matrix supp yjk supp otherwise supp defined supp let collection acyclic subgraphs write optimization problem acyclic constraints supp min function acyclic constraints dag constraints appear many network structured problems maximum acyclic subgraph problem mas find subgraph maximum cardinality subgraph satisfies acyclic constraints mas written form although exact algorithms proposed superclass cubic graphs general directed graphs works focused approximations inapproximability either mas minimum feedback arc set problem fas fas directed graph subgraph creates dag arcs feedback arc set removed note mas closely related fas dual minimum fas finding feedback arc set minimum cardinality general however minimum fas solvable polynomial time special graphs planar graphs reducible flow graphs polynomial time approximation scheme developed special case minimum fas exactly one arc exists two nodes called tournament dags also extensively studied bayesian network learning given observational data features goal find true unknown underlying network nodes features selected arcs dependency relationship features create cycle literature approaches ywpark classified three categories approaches try optimize score function defined measure fitness approaches test conditional independence check existence arcs nodes iii hybrid approaches use constraint approaches although many approaches based hybrid approaches focus solving means approaches detailed discussion hybrid approaches models undirected graphs reader referred aragam zhou han estimating true network structure approach various functions used different functions give different solutions behave differently many works focus penalized least squares penalty used obtain sparse solutions popular choices penalty term include bic concave penalty lam bacchus use length score function equivalent bic chickering proposes greedy algorithm called greedy equivalence search norm penalty van geer study properties norm penalty show positive aspects using regularization raskutti uhler use variant norm use cardinality selected subgraph score function subgraphs satisfying markov assumption penalized large penalty aragam zhou introduce generalized penalty includes concave penalty develop coordinate descent algorithm han use norm penalty propose tabu search based greedy algorithm reduced arc 
sets neighborhood selection step choice score function optimizing score function computationally challenging number possible dags grows super exponentially number nodes learning bayesian networks also shown many heuristic algorithms developed based greedy hill climbing coordinate descent enumeration score function main focus also exist exact solution approaches based mathematical programming one natural approaches based cycle prevention constraints reviewed section model covered han benchmark algorithm mip based approach scale computational time increases drastically data size increases underlying algorithm solve larger instances baharev studied mip models minimum fas based triangle inequalities set covering models several works focused polyhedral study acyclic subgraph polytopes general mip models gotten relatively less attention due scalability issue paper propose mip model iterative algorithms based following property dags property directed graph dag topological order topological order topological sort dag linear ordering nodes graph graph contains arc appears order suppose adjacency matrix acyclic graph sorting nodes acyclic graph based topological order create lower triangular matrix row column indices lower triangular matrix topological order arc lower triangular matrix used without creating cycle considering arcs lower triangular matrix optimize without worrying create cycle advantage compared search acyclicity needs examined whenever arc added although search space topological orders large smart search strategy topological order may lead better algorithm existing search methods node orderings used bayesian network learnings based markov chain monte carlo methods alternatives network structure based approaches proposed mip assigns node orders nodes add constraints satisfy property iterative algorithms search topological order space moving one topological order another order first algorithm uses gradient find better topological order second algorithm uses historical choice arcs define score nodes proposed mip model algorithms consider gaussian bayesian network learning problem penalty sparsity discussed detail section many possible models literature pick least square model recently published work han solves problem using tabu search based greedy algorithm algorithm one latest algorithms based arc search shown scalable large score function penalized least squares convex solved standard mathematical optimization packages hence select score function han use algorithm benchmark computational experiment compare performance proposed mip model algorithms algorithm han available mip models synthetic real instances contributions summarized following consider general optimization problem acyclic constraints propose mip model iterative algorithms problem based notion topological orders proposed mip model significantly less constraints mip models literature maintaining order number variables computational experiment shows proposed mip model outperforms mip models subgraph sparse iterative algorithms based topological orders outperform subgraph dense scalable benchmark algorithm han subgraph dense section present new mip model along two mip models literature section present two iterative algorithms based different search strategies topological orders gaussian bayesian network learning problem least square introduced computational experiment presented sections respectively rest paper use following notation index set nodes index set nodes excluding node supp topological order given define denote 
order node example given three nodes topological order notation add arc mip formulations based topological order section present three mip models first second models denoted mipcp mipin respectively models literature similar problems acyclic constraints third model denoted mipto new model propose based property popular mathematical programming based approach solving cutting plane algorithm traveling salesman problem formulation let set possible cycles set arcs defining cycle let supp function counts number selected arcs supp solved mipcp min supp formulated mip note exponentially many constraints due cardinality therefore practical pass cycles solver instead cutting plane algorithm starts empty active cycle set iteratively adds cycles algorithm iteratively solves min supp current active set detects cycles solution adds cycles algorithm terminates cycle detected solution one drawbacks cutting plane algorithm based worst case add exponentially many constraints fact han study model concluded cutting plane algorithm scale baharev recently presented mip models minimum feedback arc set problem based linear ordering triangular inequalities acyclic constraints presented previously used cutting plane algorithms linear ordering problem write following mip model based triangular inequalities presented mipin min supp zqj zjk zqk zqj zjk zqk zjk note zjk defined instead full matrix binary variables formulation uses lower triangle matrix using fact zjk zkj also use technique mip models presented paper however ease explanation use full matrix computational experiment done reduced number binary variables therefore cutting plane algorithm mipcp scalable implementation han twice binary variables baharev also provides set covering based mip formulation idea similar mipcp set covering formulation row column represents cycle arc respectively similar mipcp existence exponentially many cycles drawback formulation baharev use cutting plane algorithm next propose mip model based property although mipin uses significantly less constraints mipcp mipin still constraints grows rapidly hand mip model propose variables constraints addition let define decision variable matrix okq otherwise following mip model mipto min supp zjk mzkj okr ojr zjk zkj okq okq unrestricted key constraint recall zjk indicates node comes first topological order okr stores exact location order definitions forces correct values zjk zkj comparing order difference recall reduce number binary variables zjk plugging zjk zkj keep full matrix notation ease explanation next show correctly solves proposition optimal solution optimal solution proof property dag corresponding topological order let topological order defined optimal solution note define topological order hence suffices show gives dag given note right hand side measures difference topological order nodes value positive implies consider one must left hand side therefore correct value completes proof table compare mip models introduced section although three mip models binary variables mipto binary variables mipcp mipin due okq mip models mipin mipto polynomially many constraints whereas mipcp exponentially many constraints mipto smallest number constraints among three mip models computational experiment use variation cutting plane algorithm mipcp exponentially many constraints mipin mipto use cutting plane algorithm name mipcp mipin mipto reference binary variables constraints exponential table number binary variables constraints mip models algorithms based topological order although mip 
models introduced section guarantee optimality execution time solving integer programming problem exponential problem size execution time could increase drastically models require least binary variables constraints order deal larger graphs propose iterative algorithms based property observe given topological order nodes automatically determined words easily obtain subset arcs arcs used without creating cycle let determined adjacency matrix given topological order detail set otherwise let adj function generating given input topological order given generate adj solving written min supp note acyclic constraint supp supp inequality needed try obtain sparse solution subset arcs selected among possible arcs implied long satisfy inequality forms acyclic subgraph hence different adjacency matrix supp optimal solution arc selected without creating cycle reason call adjacency candidate matrix algorithms proposed later section solve providing different adj iteration fact separable sub problems separable let columns respectively node solving min supp gives solution solving separable section local improvement algorithm given topological order presented algorithm swaps pairs nodes order iterative algorithms proposed sections use local improvement algorithm presented following section topological order swapping algorithm algorithm tries improve solution swapping topological order iteration algorithm determines nodes swap order line implies select two nodes neighbors current topological order line actual node indices detected condition line avoid meaningless computation sparse know sure swapping orders thus get different solution however still swap forced new order line create new topological order swapping nodes obtaining adjacency candidate matrix line solve worth noting separable need solve values except order difference line update best solution new solution better iterations continue improvement past iterations implies would swap nodes proceed iteration algorithm illustrated following toy example algorithm tosa topological order swapping algorithm input output best solution improvement past iterations mod node indices satisfying adj solve update end end example consider graph corresponding order nodes let assume inputs iteration hence swapping nodes since created line associated order gives improved objective function value updated line let assume lines updated iteration since executed iterative reorering algorithm propose iterative reordering algorithm based property solves iteration aiming optimize iteration algorithm nodes sorted based scores defined merit scores arcs historical choice arcs used weights iii random components sorted node order directly used topological order selected arcs topological order give updates arc weights let first define notation uniform random variable merit score arc wjk weight arc score node range uniform random variable balances randomness structured scores note determined based data characteristic problem considered larger implies arc attractive based arc merit scores score node defined wjk interpreted weighted summation multiplied perturbation random number hence nodes high scores attractive initially arcs equal weights weights updated iteration based topological order iteration adj adjacency candidate matrix iteration weights updated wjk wjk overall algorithmic framework summarized algorithm line weights wjk initialized counts number iterations without best solution update initialized also random order nodes generated corresponding solution becomes best solution 
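As background for the steps walked through here, the following is a minimal Python sketch of the two building blocks shared by all the iterative algorithms: constructing the candidate adjacency matrix adj(o) from a topological order, and solving the resulting problem as p separable per-node regressions. The lasso-penalized least-squares objective matches the Gaussian Bayesian network setting used later; the function names, and the use of scikit-learn's Lasso as the per-node solver, are illustrative assumptions.

    import numpy as np
    from sklearn.linear_model import Lasso

    def adj_from_order(order):
        """Candidate adjacency matrix adj(o): arc j -> k is a candidate
        exactly when j precedes k in the topological order, so any subset
        of candidate arcs is acyclic by construction (Property 1)."""
        p = len(order)
        pos = {node: i for i, node in enumerate(order)}
        A = np.zeros((p, p), dtype=bool)
        for j in range(p):
            for k in range(p):
                if j != k and pos[j] < pos[k]:
                    A[j, k] = True
        return A

    def solve_given_order(X, order, lam):
        """Solve the subproblem given a topological order by p separable
        lasso regressions: column k is regressed on its candidate parents."""
        n, p = X.shape
        A = adj_from_order(order)
        B = np.zeros((p, p))
        obj = 0.0
        for k in range(p):
            parents = np.where(A[:, k])[0]
            if len(parents) == 0:
                obj += float(X[:, k] @ X[:, k])
                continue
            # sklearn's Lasso minimizes (1/2n)||y - Zb||^2 + alpha*||b||_1;
            # rescale alpha if an unnormalized objective is wanted.
            fit = Lasso(alpha=lam, fit_intercept=False).fit(X[:, parents], X[:, k])
            B[parents, k] = fit.coef_
            resid = X[:, k] - X[:, parents] @ fit.coef_
            obj += float(resid @ resid) + lam * np.abs(fit.coef_).sum()
        return B, obj

Because the subproblems are separable, swapping two adjacent nodes in the order, as TOSA does, changes the candidate parent sets of only the two swapped columns, so only those two regressions need to be re-solved; this is exactly the saving noted above.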
iteration first node scores calculated line topological order obtained sorting nodes finally adjacency candidate matrix generated line lines solution obtained solving best solution updated available lines tosa executed current solution within certain percentage best solution lines update line updates wjk ends iteration algorithm continues converged update best solution last iterations algorithm illustrated following toy example algorithm iterative reordering input merit score termination parameter tosa execution parameter output best solution wjk random order adj solve convergent calculate score sort nodes respect adj solve update tosa update end updated else update weights end example consider graph nodes current iteration let assume given note random numbers nodes respectively line obtain corresponding order obtaining updating best solution lines weights updated follows wnew ends current iteration gradient descent algorithm section propose gradient descent algorithm based property algorithm iteratively executes moving toward improving direction gradients dag structure recovered topological order obtained projection step algorithm based standard gradient descent framework projection step takes care acyclicity constraints generating topological order current possibly cyclic solution matrix order distinguish solutions without acyclicity property use following notation decision variable matrix without acyclicity requirement iteration decision variable matrix satisfying supp iteration let step size iteration derivative weight matrix weighs element assume constant uniform infinity norm update formula updates based weighted gradient represents entrywise hadamard product two matrices given topological order define gtjk balance gradients nodes different orders small large values nodes gradients node zero gradients node nonzero weight tries adjust gap note gtjk large since may satisfy acyclic constraints order obtain dag algorithm needs solve projection problem argminy supp norm proposition arbitrary optimization problem proof recall feedback arc set maximum acyclic subgraph dual feedback arc set problem becomes weighted maximum acyclic subgraph problem therefore solving optimality guarantee optimal solution use greedy strategy solve greedy algorithm presented algorithm sequentially determines fixes topological order node iteration problem solved optimally given currently fixed nodes corresponding orders detailed derivations algorithm proof iteration optimal given already fixed node orders available appendix words show line locally optimal selects best next node given order fixed iteration line algorithm first calculates score node picks node minimum value line order selected node fixed fixed node excluded active set iterate decreased line end algorithm determine based order determined appendix illustrate algorithm following example algorithm greedy input output feasible topological order ujk end determine appendix example consider graph following nodes given algorithm returns presented algorithm starts iteration node selected based argmin set integer updated iteration node selected based argmin set integer updated iteration node selected hence node order obtain presented objective function value overall gradient descent algorithm presented algorithm line algorithm generates random order obtain corresponding save best solution iteration loop lines follow standard gradient descent algorithm weighted gradient calculated line step size determined line based ratio line solution updated based weighted 
gradient line greedy algorithm used obtain projected solution topological order observe directly use projected solution projected solution necessarily optimal given hence line new solution obtained based lines osa executed current solution within certain percentage current best solution lines update line copies order focus solution space near algorithm continues convergent gradient based algorithms common depend case dependency justifiable since multiply gradient next show convergence algorithm makes algorithm terminate unless converged modification line executed assume following analysis assumption element yjk iteration assume small positive number large enough number note assumption mild assumption ignoring values happens practice anyway due finite precision notational convenience let second algorithm gradient descent input parameters tosa execution parameter output best solution random order adj solve convergent defined greedy solve adj osa end updated else end term written following lemma show node orders converge lemma sufficiently large satisfying proof recall obtained solving know corresponding node order let node indices defined based words node appears first followed nodes topological order proof show change node order condition met upper bounds respectively assumed first derive last inequality holds since assumption assumption kgt natural number let consider algorithm decide node order iteration assume note currently derive ujk yjk second equality holds since yjk since arc used nodes last inequality holds due nodes derive ltjkr ujkr ltjkr yjk ltjkr yjkr ljkr pjkr yjk ltjkr ltjkr fourth line holds due assumption sixth line holds due jkr combining two results obtain ujk second inequality holds due condition result implies must line algorithm note assumption automatically holds iteratively applying derivation technique show solved identical node orders resulting solutions equivalent hence following proposition holds proposition algorithm converges estimation gaussian bayesian networks section introduce gaussian bayesian network learning problem follows form goal learn estimate unknown structure nodes graph error normally distributed network estimated optimizing score function testing conditional independence mix two described section among three categories select score based approach least square function recently studied han let data set observations features let index set observations features respectively build regression model order explain feature using subset variables words set feature response variable sparse subset explanatory variables regression model variable order obtain subset lasso penalty function added considering regression models together problem represented graph feature node graph directed arc node node represents explanatory response variable relationship node goal minimize sum penalized sse regression models selected arcs create cycle let coefficient attribute dependent variable problem written supp xik xij min follows pthe form han individual weights used penalty term wjk however computational experiment set weights equal simplicity let zjk attribute used dependent variable zjk otherwise formulate mipto min xik xij zjk zjk mzkj okr ojr zjk zkj okq okq restricted large constant note linear constraint corresponding supp similarly used formulate mipin mipcp constraints used note plays important role computational efficiency optimality small mip model guarantee optimality large solution time large enumeration algorithm getting valid value park klabjan used 
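To make the structure of the order-based MIP concrete, here is a sketch in the spirit of MIPTO using the PuLP modeler. PuLP handles only linear objectives, so the quadratic penalized SSE of the actual model is replaced by an assumed linear per-arc score (negative for beneficial arcs); the exact constraint set in the paper differs in details, and all names and constants here are illustrative.

    from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, LpInteger, lpSum

    def order_acyclicity_model(score, p):
        """Acyclicity via topological orders: z[j,k] = 1 means node j
        precedes node k, o[j] is j's position in the order. In the paper,
        continuous coefficients beta_jk are linked to z via
        -M*z_jk <= beta_jk <= M*z_jk; here a binary y_jk stands in for
        arc selection, with `score` an assumed per-arc cost."""
        m = LpProblem("mipto_sketch", LpMinimize)
        z = {(j, k): LpVariable(f"z_{j}_{k}", cat=LpBinary)
             for j in range(p) for k in range(p) if j != k}
        o = {j: LpVariable(f"o_{j}", lowBound=1, upBound=p, cat=LpInteger)
             for j in range(p)}
        y = {(j, k): LpVariable(f"y_{j}_{k}", cat=LpBinary)
             for j in range(p) for k in range(p) if j != k}
        for (j, k) in y:
            # Arc j -> k may be used only if j precedes k in the order.
            m += y[j, k] <= z[j, k]
            # If z[j,k] = 1 then o[k] >= o[j] + 1; inactive otherwise,
            # since order positions differ by at most p - 1.
            m += o[k] - o[j] >= 1 - p * (1 - z[j, k])
        for j in range(p):
            for k in range(j + 1, p):
                m += z[j, k] + z[k, j] == 1   # exactly one node precedes the other
        m += lpSum(score[j][k] * y[j, k] for (j, k) in y)
        return m, y, o

With p nodes this sketch has on the order of p^2 binary variables and p^2 constraints, matching the counts reported for MIPTO above, and can be handed to any MIP solver PuLP supports (CBC by default).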
however valid value big multiple linear regression often large observed simple heuristic presented section works well iteration algorithm algorithm given topological order matrix adj let set selected candidate arcs dependent variable given fixed separable lasso linear regression problems min xij xik computational experiment computational experiments server two xeon cpus ram used although many papers studying bayesian network learning various error measures penalties focus minimizing lasso type objective sse penalty picked one latest paper han objective function benchmark mip models mipcp mipin mipto implemented cplex mipcp instead implementing original cutting plane algorithm use cplex lazy callback similar cutting plane algorithm instead solving optimally scratch iteration solve lazy callback allows updating adding constraints cycle prevention constraints process branch bound algorithm whenever integer solution cycles found given solution cycles detect cycles add cycle prevention constraints detected cycles mipcp mipin mipto set big follows given solve without acyclic constraints hence allowed use arcs model obtain estimated upper bound big max observed formula gives large enough big cases following experiment appendix present comparison regression coefficients implanted network dag big values result shows big value always valid cases considered compare algorithms models algorithm han denote dist algorithm starts neighborhood selection filters unattractive arcs removes consideration procedure specifically developed high dimensional variable selection much larger experiment many instances considered high dimensional dense solutions filtering arcs exists probability arc optimal solution removed hence deactivated neighborhood selection step original algorithm script original algorithm available journal website algorithms written use glmnet package function glmnet solving lasso linear regression problems use parameters use parameters start random solution perform different different random solutions since observe execution time much faster dist decided run different random seeds report best solution emphasize number different random seeds use notation rest section first test algorithms synthetic instances generated using package pcalg function randomdag used generate dag function rmvdag used generate multivariate data standard normal error distribution first dag generated randomdag function next generated dag random coefficients used create column standard normal error added rmvdag function uses linear regression underlying model obtaining data matrix package standardize column zero mean standard deviation equal one dag used generate multivariate data considered true structure true arc set may optimal solution score function random instances generated various parameters described following number features nodes number observations expected number true arcs per node expected density adjacency matrix true arcs changing ranges parameters three classes random instances generated sparse data sets expected total number true arcs controlled instances sparse true arc set use generate instances triplet yields total random instances dense data sets expected total number true arcs controlled instances dense true arc set compared sparse data sets use generate instances triplet thus total random instances high dimensional data sets instances high dimensional sparse expected total number true arcs controlled use generate instances pair yields total random instances use four values differently defined data 
set order cover expected number arcs four values sparse instance solve dense data sets wide range values needed obtain selected arc sets similar cardinalities true arc sets hence dense instance instead fixed values instances set use values based expected density high dimensional instance use observe expected densities adjacency matrices vary across three data sets sparse instances expected densities dense instances expected densities high dimensional instances expected densities hence different ranges values necessary results presented section present average performance example result averages instances respectively comparisons use following metrics time computation time seconds relative gap best objective value among compared algorithms models example compare three mip models mip model relative gap best three objective function values obtained mip models number arcs selected number nonzero regression coefficients comparing performance metrics use plot matrices figure multiple bar plots form matrix rows plot matrix correspond performance metrics columns stand parameters used result aggregation example left top plot figure shows execution times algorithms results aggregated number observations first row first column plot matrix figure execution times respectively section compare performance iterative algorithms benchmark algorithm dist section compare performance mip models mipcp mipin mipto also compare models algorithms subset synthetic instances section finally section solve popular real instance sachs literature comparison iterative algorithms time seconds section compare performance dist time three data sets figure result sparse data sets presented bar plot matrix presents performance measures aggregated obs nodes arcs per node penalty dist figure performance dist sparse data computation time three algorithms increases increasing decreasing computation time dist increases faster two computation time dist approximately times faster time times slower increasing computation times algorithms stay decrease seen larger instances increase time however larger number observations make predictions accurate could reduce search time unattractive subsets especially computation time dist decreases increasing think observations give better local selection algorithm adding removing arcs number selected arcs greater dist cases topological order based algorithms capable using maximum number arcs arc selection based algorithms dist struggling select many arcs without violating acyclic constraints terms solution quality algorithms less perform good however observe several trends time seconds decreases required select arcs start outperform also observe problem requires select arcs increasing increasing decreasing perform better increases dist decrease whereas increases result dense data sets presented figure bar plot matrix presents performance measures aggregated recall dense data set solve simplicity presenting aggregated result use plot matrix values used actual computation obs density solution matrix nodes penalty dist figure performance dist dense data computation time three algorithms increases increasing decreasing computation time dist increases faster two compare result sparse data sets execution times larger dense data set number selected arcs greater dist cases twice larger large small terms solution quality outperform cases values dist increase fast changing values algorithms larger sparse data sets result better cases general observe perform better problem requires select arcs result high 
dimensional data sets presented figure bar plot matrix presents performance measures aggregated excluded matrix fixed computation time three algorithms increases increasing decreasing however unlike previous two sets computation times increase faster dist due efficiency topological order based algorithms small portion arcs selected solution topological orders informative example consider graph three nodes assume one arc selected due large penalty case three topological orders represent selected arc third row figure shows three algorithms similar dist time seconds nodes arcs per node penalty dist figure performance dist high dimensional data much smaller values previous two data sets implies arc based search dist difficulties preventing cycles algorithm decide whether include arcs easier comparison values also show arc based search competitive although algorithms values less find clear evidence performance decreases increasing decreasing although values dist similar considering fast computing time dist recommend use dist sparse high dimensional data figure present combined results three data sets relating solution densities observe quadruplet results random instances algorithm value average results quadruplet algorithm avgden average density adjacency matrices results three algorithms figure present scatter plot avgden point plot average results algorithm algorithm points displayed numbers parenthesis along axes corresponding values avgden plot first observe algorithms perform similarly solutions sparse values large variance solutions dense log transformed solution density less average values dist respectively however solution quality dist drastically decreases solutions become denser makes sense sparse solutions efficiently searched search dense solutions easy obtain adding removing arcs one one also explains relatively small large values dense spares solutions respectively topological order based algorithms observe big difference three terms three data sets term obtained multiplying number parameters respectively log transformed log transformed average subgraph density dist figure scatter plot average solution densities comparison mip models section compare performance mipto mipin mipcp using time following additional metric optimality gap obtained cplex within allowed seconds due scalability issues models use sparse data also limit instances use seconds time limit cplex example time limit seconds instances result presented figure comparing time models time limit cplex observe mipin mipcp able terminate optimality several instances implies mipin mipcp efficient problem small number selected arcs small however general values tend consistent different models increase increasing decreasing three mip models execution times models increase increasing decreasing trend found models comparing values observe mipin best however performance mipin drops drastically increase decreases actually mipin fails obtain reasonably good solution within time limit several instances gives large values increases average values mipto smaller mipcp small mipcp outperforms comparison mip models algorithms figure compare models algorithms selected sparse instances used test mip models plot matrix show average computation time gap best objective value among six models algorithms note values mipin results fully displayed bar plots due large values instead actual numbers displayed next corresponding bar result shows mip models spent time solution qualities inferior general values mip models competitive large requires 
sparse solution however even case mip models spend longer time algorithms hence ignoring benefit knowing guaranteeing optimality mip models conclude algorithms perform better cases primary reason inferior performance mip models difficulty solving integer programming problems mip models least binary variables time seconds nodes arcs per node mipto mipin penalty mipcp figure performance mipto mipin mipcp sparse data constraints problem complexity grows fast finally large values big make problem even difficult due values big fathoming happen frequently branch bound procedure hence least sparse gaussian bayesian network learning mip models may best option unless complicated constraints easily dealt iterative algorithms needed real data example section study flow cytometry data set sachs solving data set studied many works including friedman shojaie michailidis zhou aragam zhou data set often used benchmark casual relationships underlying dag known cells obtained multiple experiments measurements known structure contains arcs experiment standardize column zero mean standard deviation equal one table compare performance three algorithms dist various values mip models excluded due scalability issue compare previously used performance measures execution time solution cardinalities solution quality addition also compare sensitivities true positive ratio solutions comparing known structure arcs calculate directed true positive dtp undirected true positive utp arc known structure dtp counts arc algorithm solution whereas utp counts either arcs algorithm solution solution times three algorithms within seconds small also although small observations mip models least continuous variables constraints residual terms combined complexity increment due binary variables acyclicity constraints feasible obtain reasonable solution mip models time seconds mipin arcs per node mipto nodes mipcp penalty dist figure performance models algorithms sparse data solution cardinalities similar best value among three algorithms row boldface observe provides best solution smallest cases second best values dist increase increases consistent findings section note density underlying structure dense explains good performance hand even though provides best objective function values cases dtp utp values always best highest value among three algorithms row boldface values dist largest among three algorithms dtp utp values best cases small dist tends higher dtp utp however increases gives best dtp utp values improve prediction power may need weighting features observations time dtp utp time dtp utp time dist dtp utp table performance real data set sachs table observe slight change solution quality affects final selection dag significantly also best objective function value necessarily give highest true positive value since penalized least square may best score function figure present graphs known casual interactions estimated subgraph dist graphs obtained numbers arcs different see table fact difference values dist less subgraphs common arcs figure known dag estimated subgraphs dist conclusion propose mip model iterative algorithms based topological order although computational experiment conducted gaussian bayesian network learning proposed model algorithms applicable problems following form many mip models algorithms designed based arc search using topological order provides advantages improve solution quality algorithm efficiency dag constraints acyclicity constraints automatically satisfied arcs high order nodes low order nodes 
used applying concept mip lower number constraints needed whereas arc based modeling exponentially many constraints worst case applying concept designing iterative algorithms one biggest merits capability utilizing maximum number arcs possible arc based algorithms struggle using possible arcs proposed mip model smallest number constraints number binary variables order already known mip models performs good cutting plane algorithm proposed iterative algorithms get biggest benefit solution matrix dense result presented section clearly indicates topological order based algorithms outperform density resulting solution high hand search algorithms represented dist experiment efficient desired solutions sparse comparing models algorithms used experiment observe mip models competitive scale well compared heuristic algorithms except small instances experiment shows solution times mip models significantly affected number nodes gaussian bayesian network learning observe large could also decrease mip model efficiency even small section among iterative algorithms recommend dist sparse high dimensional data dense data among two topological order based algorithms performs slightly better stable references aragam zhou concave penalized estimation sparse gaussian bayesian networks journal machine learning research baharev schichl neumaier exact method minimum feedback arc set problem technical report available http bolotashvili kovalev girlich new facets linear ordering polytope siam journal discrete mathematics chickering learning bayesian networks learning data pages springer chickering optimal structure identification greedy search journal machine learning research nov cook cunningham pulleyblank schrijver combinatorial optimization john wiley sons new york usa cormen leiserson rivest stein introduction algorithms mit press ellis wong learning causal bayesian network structures experimental data journal american statistical association even naor schieber sudan approximating minimum feedback sets multicuts directed graphs algorithmica fernau raible exact algorithms maximum acyclic subgraph superclass cubic graphs international workshop algorithms computation pages springer friedman hastie tibshirani sparse inverse covariance estimation graphical lasso biostatistics friedman hastie tibshirani regularization paths generalized linear models via coordinate descent journal statistical software friedman koller bayesian network structure bayesian approach structure discovery bayesian networks machine learning jan zhou learning sparse causal gaussian networks experimental intervention regularization coordinate descent journal american statistical association goemans hall strongest facets acyclic subgraph polytope unknown international conference integer programming combinatorial optimization pages springer reinelt cutting plane algorithm linear ordering problem operations research reinelt acyclic subgraph polytope mathematical programming guruswami manokaran raghavendra beating random ordering hard inapproximability maximum acyclic subgraph annual ieee symposium foundations computer science pages ieee han chen cheon zhong estimation directed acyclic graphs adaptive lasso gene network inference journal american statistical association pages hassin rubinstein approximations maximum acyclic subgraph problem information processing letters heckerman geiger chickering learning bayesian networks combination knowledge statistical data machine learning kaas branch bound algorithm acyclic subgraph problem european journal 
operational research kalisch colombo maathuis causal inference using graphical models package pcalg journal statistical software karp reducibility among combinatorial problems springer schudy rank errors proceedings annual acm symposium theory computing pages acm lam bacchus learning bayesian belief networks approach based mdl principle computational intelligence leung lee facets fences linear ordering acyclic subgraph polytopes discrete applied mathematics lucchesi younger minimax theorem directed graphs journal london mathematical society mitchell borchers solving linear ordering problems combined interior cutting plane algorithm high performance optimization pages springer parviainen koivisto structure discovery bayesian networks sampling partial orders journal machine learning research park klabjan subset selection multiple linear regression via optimization technical report available http core team language environment statistical computing foundation statistical computing vienna austria ramachandran finding minimum feedback arc set reducible flow graphs journal algorithms raskutti uhler learning directed acyclic graphs based sparsest permutations arxiv preprint sachs perez lauffenburger nolan causal networks derived multiparameter data science shojaie michailidis penalized likelihood methods estimation sparse directed acyclic graphs biometrika van geer maximum likelihood sparse directed acyclic graphs annals statistics appendix greedy algorithm projection problem section present detail derivations proofs algorithm greedy algorithm sequentially determines topological order optimizing projection problem given already fixed order iteration point solving projection problem algorithm may give global optimal solution however result section shows algorithm gives optimal choice next node fixed order given orders start describing properties following three lemmas following lemmas let represent topological order node defined lemma yjk must either yjk yjk ujk proof contradiction let assume exist indices yqr yqr uqr let create new solution except uqr note feasible solution supp supp since yqr yqr uqr uqr except yqr contradicts optimality note lemma implies solving essentially choosing ujk yjk selection also based following property ujk lemma yjk proof contradiction let assume exist indices yqr uqr let create new solution except uqr yqr dag since supp supp yqr arc used solution without creating cycle hence dag therefore feasible solution however easy see yqr uqr uqr contradicts optimality given topological order let subset nodes earlier node combining lemmas conclude following structure ujk yjk calculate node contribution objective function value without explicitly using lemma node contributes ujk objective function value words contribution node squared sum ujk nodes proof node derive yjk ujk yjk ujk ujk equal signs due first equality due yjk ujk second equality holds since yjk next detail derivation greedy algorithm presented algorithm let index set nodes index set nodes already ordered procedure equivalent iteratively solving min yjk ujk ymk column corresponding node set updated solving propose algorithm solve based solve properties described lemmas given ujk next show solving gives optimal solution actually replicate properties lemma optimal solution must either ujk lemma optimal solution must ujk lemma optimal solution must satisfy ujk proofs omitted similar proofs lemmas respectively note lemma show equivalence ujk ujk result also holds term argmin function hence following lemma holds lemma 
solving the reduced per-node problem is equivalent to solving the original projection subproblem. Observe that the reduced problem is exactly what is solved in the node-selection line of the greedy algorithm. Hence, by this property, the greedy algorithm gives an optimal choice of the next node given the already fixed partial topological order.

Summary statistics of maximum coefficients. In this appendix we show that the heuristic formula for selecting big-M gives reasonably large enough values. Note that, in creating the synthetic instances, we used a random DAG to generate the multivariate data. Although the optimal DAG for the penalized least squares with the appropriate penalty constants is unknown, we use the implanted DAG to obtain the coefficient estimates and calculate the maximum absolute coefficient over the implanted DAG. This value is compared with the big-M values for the instances used for the MIP models (sparse data), for which we calculate the minimum, average, and maximum. The result is presented in the table below, whose columns give, per parameter setting and value, the min/avg/max of the maximum absolute coefficients and the min/avg/max of big-M. Although not all summary statistics are presented in the table, we observed that big-M was greater than the maximum absolute coefficient in all cases considered.

Table: comparison of the maximum coefficients and big-M on sparse data (columns: parameter, value, min/avg/max of the maximum coefficient, min/avg/max of big-M).
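For reference, the big-M heuristic that these statistics evaluate (described in the computational-experiment section: fit every per-node regression with the acyclicity constraints dropped, then scale the largest coefficient) can be sketched as follows. The lasso solver and the safety multiplier are assumptions, since the exact scaling constant is not recoverable from the text; the paper only reports that the resulting M exceeded the maximum coefficient on all instances checked.

    import numpy as np
    from sklearn.linear_model import Lasso

    def estimate_big_m(X, lam, factor=2.0):
        """Heuristic big-M: solve each per-node lasso with all other
        columns as candidate parents (acyclicity ignored), then scale
        the largest absolute coefficient. `factor` is an illustrative
        safety multiplier."""
        n, p = X.shape
        max_coef = 0.0
        for k in range(p):
            others = [j for j in range(p) if j != k]
            fit = Lasso(alpha=lam, fit_intercept=False).fit(X[:, others], X[:, k])
            max_coef = max(max_coef, float(np.max(np.abs(fit.coef_))))
        return factor * max_coef

Because the unconstrained fits allow every arc, the resulting coefficient magnitudes upper-bound what any acyclicity-restricted solution can use, which is why the scaled maximum is a plausible, if heuristic, choice of M.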
consideration publication theory practice logic programming dec tchr framework tabled clp tom bart demoen dept computer science belgium toms bmd david warren dept computer science state university new york stony brook usa warren submitted september revised july accepted dec abstract tabled constraint logic programming powerful execution mechanism dealing constraint logic programming without worrying fixpoint computation various applications fields program analysis model checking proposed unfortunately system developing new applications lacking programmers forced resort complicated hoc solutions papers presents tchr framework tabled constraint logic programming integrates manner constraint handling rules chr language constraint solvers tabled logic programming framework easily instantiated new constraint domains various operations instantiated control performance particular propose novel generalized technique compacting answer sets keywords constraint logic programming constraint handling rules tabled execution introduction notion tabled constraint logic programming clp originates constraint databases community kanellakis ordinary database data stored relations atomic values constraint databases generalize atomic values constraint variables field restricted range values rather single value allows compact representations explicitly enumerating atomic values datalog formalism reasoning ordinary databases queries particular generalized datalogd purpose datalog restricted form logic programming datalogd restricted form constraint logic programming restrictions enforce programs finite interpretations research assistant fund scientific research flanders belgium vlaanderen schrijvers due finiteness properties queries datalogd programs resolved computation rather usual computation clp former advantage terminates datalogd programs whereas latter may get stuck infinite loops however goaldirected approach usually obtains desired result much faster uses less space reason toman toman proposed compromise tabling tabling technique improving termination properties approach memoization intermediate results generalizing tabled datalogd tabled clp benefit generalized expressivity clp improved termination properties tabling number different applications proposed tabled datalogd tabled clp toman considers alternative approach implementing abstract interpretation toman constraints abstract concrete values tabling takes care fixpoints various applications context model checking developed mukund pemmasani constraints impose restrictions parameters parametrized models tabling takes care cycles model graphs establishes clear need tabled clp let consider availability tabled clp systems turns comprehensive system developing new tabled clp applications missing completely model checking applications mukund pemmasani adapted existing tabled logic programming system xsb warren constraint programming facilities various hoc laborious ways first xsb developers resorted interfacing foreign language libraries implementing constraint solvers xsb close coupling constraint solver application consequence instance initial feasibility study model checking system used meta interpreter written xsb deal constraints see mukund subsequent full system implements interface xsb poline constraint solver library passes around handles constraint store xsb program see later stage real time model checking application used distance bound matrices implemented xsb see pemmasani attempt facilitate use constraints xsb extended attributed variables 
cui warren attributed variables holzbaur prolog language feature widely used implementing constraint solvers allows associate data unbound variables manipulate also interrupt unification variables unfortunately constraint solvers complex programs even attributed variables daunting task implement order substantially lower threshold tabled clp formalism needed writing new constraint solvers integrating tabled logic programming system work present formalism tabled constraint handling rules tchr short tchr framework developing new constraint solvers tabled tchr framework tabled clp logic programming environment integrates constraint handling rules chr established formalism writing new constraint solvers tabled logic programming framework offers number default operations specialized instantiations control semantics performance practical implementation framework presented integration chr xsb integration shows tabled constraint logic programming system obtained constraint logic programming tabled logic programming system little impact either although chosen xsb particular tabled logic programming system believe ideas readily apply systems summary major contributions work framework developing new constraint solvers tabled logic programming system practical implementation framework terms chr xsb novel generalized approach answer set reduction integration believe combines topdown fixpoint computations superior termination properties xsb constraint programming capabilities chr combined power enables programmers easily write highly declarative programs easy maintain extend overview rest text structured follows first sections provide basic technical background knowledge tabled execution constraint logic programs constraint handling rules respectively section outlines contribution framework tabled clp system integrated terms slg constraint handling rules subsequent sections discuss detail different options operations framework call abstraction section answer projection section answer set optimization section finally section discusses related possible future work section concludes first end introduction small motivating example domain model checking example systems wolper manipulate data variables unbounded domains finite number control locations systems modeled extended finite automata ramakrishnan finite automata guards transitions variable mapping relations source destination locations useful modelling subsequently checking buffers protocols simple example system modeled clp edge edge edge schrijvers system three control locations one variable respectively clause represents edge system possible transition one control location source another destination inequality constraint clause guards transition equality constraint relates source variable destination variable variable mapping suppose interested whether location reachable location values parameter let define reachability predicate reach reach edge reach reachability question captured query reach order answer query tabling required avoid trap cycle graph time constraints allow compact symbolic representation infinite search space without good interaction tabling constraints would able obtain concise solution little effort tabled constraint logic programs section cover basics tabled constraint logic programming first syntax constraint logic programs presented section next section explains constraints part clp constraint domain finally section presents operational semantics slgd tabled constraint logic programs syntax constraint logic programs 
constraint logic program consists number rules called clauses form atom constraint literals literal either atom negated atom called head clause called body comma called conjunction corresponds logical conjunction semantics constraint logic programs atoms constructed predicate symbols variables meaning defined constraint logic programming syntax semantics constraints defined constraint domain see section literals body positive clause definite clause normal clause clause may also contain negative literals definite constraint logic program consists definite clauses normal constraint logic program tchr framework tabled clp normal clauses one consider definite programs address programs short constraint domains constraint solver partial executable implementation constraint domain constraint domain consists set constraint symbols logical theory every constraint symbol tuple value sets primitive constraint constructed constraint symbol every argument position either variable value corresponding value set similar way atom constructed logic program constraint form primitive constraints two distinct constraints true false former always holds latter never holds empty conjunction constraints written true logical theory determines constraints hold constraints hold typically use also refer specifically example means logical theory constraint domain constraint holds valuation constraint variable substitution maps variables vars onto values constraint domain valuation solution holds constraint domain constraint satisfiable solution otherwise unsatisfiable two constraints equivalent denoted solutions constraint domain particular interest herbrand domain constraint symbol term equality ranges herbrand terms plain logic programming seen specialized form constraint logic programming herbrand domain two problems associated constraint solution problem determining particular solution satisfaction problem determining whether exists least one solution algorithm determining satisfiability constraint called constraint solver often solution produced general technique used many constraint solvers repeatedly rewrite constraint equivalent constraint solved form obtained constraint solved form property clear whether satisfiable see marriott stuckey extensive introduction constraint solvers semantics constraint logic programs survey constraint logic programming clp jaffar maher various forms semantics listed constraint logic programs logic semantics based clark completion clark fixpoint semantics jaffar lassez well new framework operational semantics schrijvers clp fixpoint semantics defined usual way fixpoint extended immediate consequence operator definition clp immediate consequence operator consequence function tpd clp program constraint domain defined valuation execution strategy slgd using tabling clp fixpoint semantics developed toman toman slgd semantics encompasses best operational semantics like evaluation favorable termination properties like evaluation basic slgd semantics slgd semantics makes two assumptions constraint domain firstly includes projection operation returns disjunction constraints used state one notation juncts disjunction secondly assumed relation provided relation least strong implication slgd formulated terms four resolution rewriting rules listed table rules either expand existing tree nodes create new root nodes four different kinds tree nodes root body goal ans atom literals slgd tree built root node using resolution rules slgd forest set slgd trees meaning different resolution rules 
following clause resolution rule expands root node every matching clause head body node created containing clause body literals least one literal body node expanded query projection rule goal nodes rule selects literal resolved given query projection rule implements selection strategy common systems including xsb however strategy valid well current constraint store projected onto selected literal variables yielding constraints relevant also known constraint stores tchr framework tabled clp literal projection yields disjunction constraints one goal node created every disjunct literal body node expanded answer projection rule number answer nodes purpose current constraint store projected onto goal variables retaining constraints relevant goal way variables local chosen clause body eliminated goal node expanded new body nodes answer propagation rule rule substitutes selected literal answers selected literal answer constraint stores incorporated current store note answer propagation answer projection rules cooperate whenever new answer produced propagated nodes already resolved using answers tree also answer propagation rule responsible creating new slgd trees tree root node subsumes goal resolved found node root created start separate tree finally query slgd formalism tuple vars vars arguments variables slgd resolution rules used query evaluation follows create slgd forest containing single tree root expand leftmost node using resolution rules long applied return set ans answers query definition answer set answer set ans set ans slg slg slgd tree rooted root example let consider following simple clp program true constraint domain domain integers supported basic constraint equality arithmetic expressions figure depicts slgd forest query true full arrows represent slgd tree branches whereas dashed arrow indicates start new tree dotted arrows indicate propagation new answers arrow labeled step number answer set ans true consists two answers schrijvers parent children conditions clause resolution body root body bkl bki satisfiable query projection goal body goal answer propagation body goal body ans satisfiable answer projection ans body ans table slgd resolution rules parent children optimized query projection goal body goal conditions table optimized query projection slgd resolution slgd optimizations several optimizations rewriting formulas proposed toman one query projection particular interest optimization allows general goals strictly necessary resolved way fewer goals resolved distinct specific queries covered general goal table lists modified query projection rule called optimized query projection second important optimization modified version answer set definition tchr framework tabled clp root true tttt ttt tttt tttt ggg gggg gggg ggg body body goal true ans wwwww www wwww wwww body body apj ans ans root true body true goal true true gggg gggg gggg ggg body tttt ttt tttt tttt body ans ans fig example slgd forest definition optimized answer set optimized answer set query denoted ans set ans slg already ans alternative definition allows answers omitted already entailed earlier general answers logically answers entailed set answers smaller new definition note slg operational semantics tabled logic programming fact specialized form slgd semantics herbrand domain several implementations slg exist including xsb topic paper integration chr tabled execution effect implementation slgd arbitrary defined chr program toman toman also extended work execution schrijvers strategy clp programs negation 
extension realizes semantics implementation extension covered work imposes additional requirements constraint solver finite representation negation constraint exist moreover detection loops negation requires complicated tabling mechanism constraint handling rules section give brief overview constraint handling rules chr abdennadher syntax chr use two disjoint sets predicate symbols two different kinds constraints constraint symbols solved given constraint solver chr constraint symbols defined rules chr program three kinds rules simplification rule propagation rule simpagation rule ame ame ame name optional unique identifier rule head conjunction chr constraints guard conjunction constraints body goal query conjunction chr constraints trivial guard expression true omitted rule head simplification rule called removed head rule replaces head body similarly head propagation rule called kept head rule adds body presence head simpagation rules abbreviate simplification rules form ame kept head removed head chr program consists ordered set chr rules operational semantics chr formal operational semantics chr given terms state transition system figure program state indexed first part tuple goal multiset constraints rewritten solved form chr constraint store multiset identified chr constraints matched rules program identified chr constraint chr constraint associated unique integer constraint identifier number serves differentiate among copies constraint introduce functions chr extend sequences sets multisets identified chr constraints obvious manner chr constraint store conjunction constraints tchr framework tabled clp solve constraint introduce chr constraint apply exists renamed apart rule form matching substitution chr chr result hold fig transition rules operational semantics chr passed underlying solver since usually information internal representation model abstract logical conjunction constraints propagation history set sequences recording identities chr constraints fired rule name rule necessary prevent trivial propagation rules propagation rule allowed fire set constraints constraints used fire rule finally counter represents next free integer used number chr constraint given initial query initial program state true rules program applied exhaustion initial program state rule applicable head constraints matched constraints current chr store matching guard rule implied constraints goal applicable rules applied application undone contrast prolog simplification rule applied matched constraints current chr store replaced body rule propagation rule applied body rule added goal without removing constraints implementation chr description operational semantics chr leaves two main sources order constraints query processed order rules prolog almost chr implementations execute queries left right apply rules textual order program behavior formalized refined semantics also proven concretization standard operational semantics duck nondeterminism due order delayed constraints multiple matches rule relevance programs discussed schrijvers refined semantics actual implementations chr constraint query understood procedure goes efficiently rules program order written matches head constraint rule look partner constraints head constraint store check guard applicable rule found consider constraint active active constraint removed trying rules put constraint store constraints store reconsidered woken newly added constraints constrain variables constraint rules may become applicable guards implied refined 
operational semantics implemented major chr systems among chr system schrijvers demoen system currently available three different prolog systems hprolog demoen wielemaker xsb serves basis integration tabled execution paper chr system schrijvers demoen based general compilation schema chr holzbaur holzbaur paper section relevant know chr constraint store implemented global updatable term containing identified constraints context also called suspended constraints grouped functor suspended constraint represented suspension term including following information constraint constraint identifier continuation goal executed reactivation goal contains suspension argument fact cyclic term part propagation history containing propagation rule tuple identifiers constraints constraint interacted variables involved suspended constraints behave indexes global store suspensions attached attributes aim towards integration chr tabled logic programming question established representation properties consider something cope refer interest reader schrijvers details chr implementation chr constraint solving chr language intended language implementing constraint solvers chr program constraint solver constraint domain whose constraint symbols chr constraint symbols constraint theory program consists constraint theory together declarative meaning chr rules declarative meaning simplification tchr framework tabled clp rule form vars vars vars vars similarly declarative meaning propagation rule form value sets explicitly defined chr program exist implicitly intention programmer see abdennadher extensive treatment chr writing constraint solvers tchr framework main challenge introducing chr xsb integration chr constraint solvers backward chaining fixpoint computation slg resolution according slgd semantics previous section similar integration problem solved cui warren describes framework constraint solvers written attributed variables xsb name tabled constraint logic programming tclp coined publication though formulated terms slgd resolution porting chr xsb already recognized important future work chr much convenient developing constraint solvers attributed variables nature advantage carried tabled context making tabled chr convenient paradigm tclp attributed variables indeed show internal details presented current section hidden user cui warren general tclp framework specifies three operations control tabling constraints call abstraction entailment checking answers answer projection operations correspond optimization query projection projection answer projection compaction ans set left constraint solver programmer implement operations particular solver following formulate operations terms chr operations covered significant detail actual chr implementation encoding global chr constraint store taken account general scheme tchr implementation objective tchr implement slgd semantics arbitrary constraint domain implemented chr constraint solver purpose slgh slg implementation xsb sldd implementation chr implementation xsb disposal slg special case slgd herbrand constraint domain schrijvers hence aim simplest least intrusive solution use unmodified chr implementation constraint solving use unmodified slg implementation tabled execution intersection point point transform back forth chr constraints encoding constraints advantages lightweight approach twofold firstly straightforward realize full expressivity slgd within existing system secondly affect existing programs performance downside note tchr performance particular 
constant factors involved optimal however chr aim towards performance first place rather towards highly expressive formalism experimenting new constraint solvers similarly see tchr framework highly expressive prototyping system exploring new applications offer means affect performance resulting performance simply good enough one may decide reimplement established approach language look solution detail points leave system untouched consider implementing point translating constraint encodings first let consider different kinds nodes used slgd tree nodes root answer nodes manifestly represented slgh implementations like xsb respectively call answer tables hence two nodes require constraint store encoding form two nodes goal body nodes implicit execution mechanism free use form suits best formats nodes mind consider one one different resolution rules clause resolution rule depicted constraint annotated type encoding herbrand encoding chr natural chr encoding constraint store initially herbrand encoded root node decoded natural chr form solving chr solver chr solver either fails conjunction satisfiable returns simplified form conjunction body cchr dchr root body bkl cchr dchr dchr bki cchr dchr satisfiable optimized query projection rule directly starts constraint store natural chr constraint form projects onto first literal subsequently generalizes chr program normally come combined projection generalization operation one supplied tchr framework call abstraction section discusses kind generic projection operation tchr framework implements tchr framework tabled clp goal cchr cchr body cchr goal cchr cchr answer propagation answers consumed rule decoded herbrand form implication check satisfiability check chr program normally come implication check one supplied tchr framework covered together call abstraction section body cchr goal cchr cchr body cchr alchr ans cchr cchr cchr satisfiable answer projection projection performed chr constraint representation instance projection call answer projection like answer projection supplied framework section details operation within framework elaborated ans body cchr aichr ans alh established new operations mappings include framework consider incorporated existing slgh system xsb recall intend modify system incorporate projection operations order keep integration neither want encumber programmer tedious rather task instead propose automatic transformation based simple declaration introduce operations transformation maps slgd program onto slgh program mapping every predicate considered independently mapped onto three predicates tabled original maps slgd program onto slgh program outline transformation single predicate table body three resulting predicates currentstoreencoding currentstoreencoding abstractstoreencoding abstractstoreencoding answerstoreencoding currentstoreencoding answerstoreencoding schrijvers table storeencoding nstoreencoding storeencoding project nstoreencoding body new predicate front actual tabled predicate tabled front allows predicate called old calling convention constraint store implicit natural chr form thanks front transformation modular modify existing calls predicate either predicates bodies body body queries auxiliary predicate encode returns herbrand encoding current implicit constraint store call predicate projects herbrand encoded store onto call arguments implicit constraint store emptied empty interfere tabled call herbrand encoded stores manifest answers finally predicate decode decodes herbrand encoding adds resulting 
chr constraint store implicit store predicate called twice first restore current constraint store add answer constraint store tabled call tabled predicate tabled predicate body encoded input store decoded original predicate code original run resulting store encoded projected onto call arguments implicit store emptied interfere caller note outline mapping practice scheme specialized concrete operations discusses later discuss operations detail transformation predicates queries fully transparent user indicate predicates tabled add declaration form chr options meaning predicate tabled first argument ordinary prolog term second argument chr constraint variable optional list additional options options may provided control transformation encoding encodingtype tchr framework tabled clp section studies two alternative encodings herbrand constraint store option allows user choose projection predname projection applied answer projection rule addressed section projection realized call projection predicate reduces constraint store projected form canonical form predname answer combination predname two options relate optimizations answer set based definition novel generalization principle discussed section figure summarizes different steps handling call tabled predicate call abstract call table project answer execute call yes yes store table yes entailed answers combine answer call fig tabled call flowchart herbrand constraint store encodings section present two alternative herbrand constraint store encodings encoding must following properties encoding suitable passing argument predicate storing answer table possible convert natural chr constraint form see section back insertion call table retrieval answer table essential aspects ordinary chr constraint store implementation covered section two different herbrand constraint store encodings schrijvers based ordinary form explored suspension encoding goal encoding former based state copying latter recomputation discussion respective merits weaknesses well evaluation follow sections respectively one implicit aspect chr execution refined operational semantics order constraints processed ordering information maintained explicitly without additional support straightforward maintain ordering information tabled constraints however spirit tabling declarative meaning program rather operational behavior importance reason shall attempt realize ordering refined operational semantics user point view chr constraints behave according theoretical operational semantics assumptions made ordering suspension encoding encoding aims keeping tabled encoding close possible ordinary form essential issue retain propagation history constraints way unnecessary propagation rules occurs constraints retrieved table however possible store ordinary constraint suspensions table fortunately attributed variables stored tables see cui warren two aspects taken account firstly suspensions cyclic terms tables handle dealt breaking cycles upon encoding resetting decoding secondly constraint identifiers replaced fresh ones decoding multiple calls would otherwise create multiple copies constraints identical identifiers finally decoding constraints activated order solve together already present constraints done simply calling continuation goals example let consider following program constraint query fired rule suspension constraint looks like suspension reactivate identifier reactivate continuation goal propagation history recorded rule fired tchr framework tabled clp constraint suspension store 
would constraint suspension reactivate suspension encoding store two constraints would look like suspension suspension upon decoding simply unify fresh identifiers corresponding suspension terms resulting suspension terms placed implicit chr constraint store finally continuation goals suspensions called goal encoding goal encoding aims keeping information table simple form possible suspended constraint goal impose constraint retained table easy create goal suspension easy merge goal back another constraint store needs called whenever necessary goal creates suspension fresh unique identifier inserts constraint store information lost encoding propagation history may lead multiple propagations combination head constraints sound restriction chr rules required behave according set semantics presence multiple identical constraints lead different answers modulo identical constraints example goal encoding example decoding procedure simply calls evaluation measure relative performance two presented encodings consider following two programs prop simp constraints constraints true true schrijvers table evaluation two tabled store encodings program prop simp tabling runtime space encoding suspension runtime space encoding goal runtime space programs predicate puts constraints constraint store prop program uses propagation rule achieve simp program uses auxiliary constraint version query time complexity simp prop program two possible encodings answer constraint store specified tabling declaration follows encoding suspension encoding goal table gives results query untabled tabled using two encodings runtime milliseconds space usage tables bytes programs answer table contains constraint store constraints space overhead due difference encoding suspension contains information simple call however difference constant factor part suspension general size greater propagation history prop program every constraint history limited remembering propagation rule used simp program propagation history always empty runtime prop version suspension encoding considerably better version goal encoding fact complexity difference answer retrieved table suspension encoding propagation history prevents hence answer retrieval takes time however goal encoding every constraint answer start propagating complexity answer retrieval becomes hand simp propagation history plays role runtime overhead mostly due additional overhead suspension encoding opposed simpler form goal encoding comparison without tabling query takes milliseconds programs call abstraction call abstraction operation combine projection generalization operations optimized query projection rule slgd semantics idea steps reduce number distinct slgd trees hence number tables predicate called many different call patterns tchr framework tabled clp table generated call pattern thus possible information one strongly constrained call present many times tables different less constrained call patterns duplication tables avoided using call abstraction obtain smaller set call patterns projection reduces context predicate call constraint store constraints relevant call way two calls respectively constraint stores yield projected call store subsequent generalization step goes even relaxing bounds reference value constraint stores become hence call abstraction effectively means control number tables level slgh call abstraction means passing certain bindings call example abstracted goal followed ensure appropriate bindings retained slgd call abstraction generalized bindings constraints 
abstraction means removing constraints arguments consider example call constraint call abstracted followed reintroduce constraint abstraction particularly useful constraint solvers number constraints variable much larger number different bindings variable consider example finite domain constraint solver constraint first argument variable second argument set possible values variable domain size contains different values variable take many different constraints one subset values thus many different tables would needed cover every possible call pattern varying degrees abstraction possible depending particular constraint system application full constraint abstraction removal constraints call generally option chr following reasons chr rules require constraints variables exclusively ground terms atoms well useful various reasons encoding constraint variables ground terms particular solving algorithms used conveniently efficiently equation solving algorithm optimal using ground elements schrijvers straightforward automatically define abstraction ground terms necessarily passed arguments well created inside call hence explicit link call environment link needed call abstraction abstraction full constraint abstraction seem suitable chr full constraint abstraction preferable previously mentioned table likely order reuse existing answers existing calls considered answer schrijvers propagation rule previous calls compared new call using implication check unfortunately implication check come chr solver special case tabling taken slgh terminology tabling unfortunately even establishing equivalence constraint stores directly supported chr solvers however call constraint store empty true problem disappears true implies true independent constraint domain moreover may costly sort constraints passed call abstracted away hence often full abstraction cheaper partial abstraction instance consider typical finite domain constraint solver binary constraints constraint graph number finite domain constraints node every variable involved constraint edge variables involved constraint additional constraint imposed variable component graph may affect domain variables component hence call abstraction subset variables involves costly transitive closure reachability constraint graph let revisit transformation scheme section predicate specialize full call abstraction storeencoding nstoreencoding storeencoding nstoreencoding table nstoreencoding project nstoreencoding body know constraint store empty longer need pass argument tabled predicate decode effect call abstraction replace constraint variable fresh variable necessary prevent constraints reachable attributes unification end specialization substitution appears answer propagation rule tchr framework tabled clp answer projection constraint domains logical answer represented many different ways example consider predicate represent answer call concerning constraints relate call arguments like meaningless outside call local variable existentially quantified constrained introduce unsatisfiability later stage purpose projection restrict constraints set variables interest eliminate variables much possible setting variables interest call arguments projection sound already present yet detected unsatisfiability removed sufficient necessary condition constraint system complete unsatisfiability detected immediately projection important context tabling may give logically equivalent answers syntactical form two answers syntactical form recognized duplicates one retained table vital 
application projection predicate infinite number different answers may turned one finite number answers discarding constraints local variables example consider program path edge path path path edge leq leq leq leq leq leq leq leq leq true true leq defines predicate expresses reachability graph represented predicates first two arguments predicates edges origin destination third constraint variable along every edge graph additional constraints may imposed variable example graph consists single loop edge loop imposes two constraints leq leq variable local variable fourth rule derives leq also holds schrijvers query path determines different paths infinite number paths simple graph one integer path takes loop times every time loop taken new variable created two constraints leq leq added propagation rule also leq added time loop taken second simpagation rule however removes one copy last constraint even though infinite number answers constraints involving local variables interest single leq relevant general constraint projection onto set variables transforms constraint store another constraint store variables given set involved form resulting constraint store strongly depends particular constraint solver computation may involve arbitrary analysis original constraint store propose believe elegant approach projection consists compact high level notation user declares use approach projection follows chr projection redn ame implements projection number chr rules involve special redn constraint constraint argument set variables project transformation generates predicate tabled based declaration table nstoreencoding redn ame nstoreencoding projection operation supplied default action return constraint store unmodified implement projection simpagation rules used decide constraints remove final simplification rule end used remove projection constraint store following example shows project away constraints involve arguments contained given set vars project vars leq member vars member vars true project vars true besides removal constraints sophisticated operations weakening possible consider set solver two constraints requires tchr framework tabled clp element set requires set rules projection could include following weakening rule project vars elem set member set vars member elem vars nonempty set answer set optimization section consider various ways reducing size answer set first section consider technique proposed toman leads sidetrack section outline technique dynamic programming answer subsumption section continues main story established answer subsumption suboptimal general constraint domains generalized approach proposed instead section relax soundness condition answer reduction speculate applications program analysis finally section evaluate two main approaches answer subsumption answers computed tabled predicate may redundant need saved property exploited definition optimized answer set definition terms slgh consider example answer already table predicate new answer found new answer redundant covered general already table hence logically valid record answer table simply discard affect soundness completeness procedure extend idea answer subsumption chr constraints path length computation serve illustration example dist edge leq dist dist edge leq suppose appropriate rules constraint program leq means semantics dist holds path length less equal words upper bound length path answer dist leq already table new answer dist leq found new answer redundant hence discarded affect soundness since logically 
answers covered strategy establishing implication provided following property schrijvers logical formulas particular consider previous answer constraint store newly computed one strategy follows end tabled predicate execution previous answer store merged new answer store merging store simplified propagated available rules chr program combines two answers new one mechanism used check entailment one answer combined answer store equal one two answer store entails practical procedure following table nstoreencoding project storeencoding prevstoreencoding answerid prevstoreencoding conjunction conjunction prevstoreencoding answerid fail conjunction storeencoding fail nstoreencoding storeencoding computing projecting herbrand encoding new answer store look previous answer stores assume predicate previous purpose backtracks previous answers also provides handle answerid returned answer previous answer store still herbrand encoding decode simultaneous effect adding new implicit chr constraint store still place computes resulting conjunction herbrand encoded comparison syntactical equality used sound approximation equivalence check first equivalence sign formula conjunction equals previous answer prevstoreencoding previous answer implied new answer hence obsolete use predicate del answer erase answer table backtrack alternative previous answers otherwise conjunction equal new answer neither implies also backtrack alternative previous answers however conjunction equals new answer storeencoding means implied tchr framework tabled clp previous answer hence fail ignoring alternative previous answers hand resulting answer implied previous answers genuinely new answer stored answer table example consider example assume answer stores leq leq leq successively produced query dist first answer leq produced previous answers makes way answer table second answer leq already previous answer leq conjoined following rule rule simplifies conjunction retain general answer leq leq true hence resulting solved form conjunction leq words previous answer words previous answer implied new answer deleted answer table new answer recorded finally following procedure discover third answer already implied second one final answer set contains second answer note program would normally generate infinite number answers cyclic graph logically correct terminating however tabled answer subsumption terminate weights terminate produces one answer namely dist leq length shortest path indeed predicate returns optimal answer syntactical equality check herbrand encoding general approximation proper equivalence check option table chr declarations allows improve effectiveness canonical form predname specifies name predicate compute approximate canonical form herbrand encoded answer constraint store canonical form used check equivalence two constraint stores example leq leq leq leq permutations herbrand constraint store encoding obviously based simple syntactic equality check different however reduced canonical form help prolog refer schrijvers elaborated discussion property alternative elaborate implementation implication checking strategy chr contrast generic approach traditional approach clp solver provide number predefined ask constraints saraswat rinard subsumption checks primitive constraints primitive ask constraints combined form complicated subsumption checks duck avoided approach puts greater burden constraint solver implementer provide implementation primitive ask schrijvers straints future work could incorporate ask constraints 
generic approach greater programmer control performance accuracy subsumption tests dynamic programming answer subsumption technique used program replace computation exact distance path computation upper bound distance via constraints tabling predicate performing answer subsumption defining predicate effectively turned optimizing one computing length shortest path straightforward yet powerful optimization technique applied defining predicates well turning optimizing dynamic programming predicates minimum changes comparison usual approach consists explicitly computing list answers using prolog processing list answers guo gupta guo gupta added specific feature tabled execution realize dynamic programming functionality adding support chr tabling get functionality free general answer compaction definition yields sound approach reducing size answer tables however discovered special case really possible therefore propose following generalized definition answer sets compacted answer set covers sound approaches reducing answer set size definition compacted answer set compacted answer set query denoted ans set new fully instantiated ground answers introduced ans ans slg fully instantiated answers covered ans slg ans answer set compact individual answers ans slg constraint stores valuations note optimized answer set special instance compacted answer set certainly herbrand constraints optimal strategy tchr framework tabled clp conjunctions herbrand equality constraints words finding single herbrand constraint covers two given ones sufficient considering two unfortunately similar property hold constraint domains single constraint store may equivalent disjunction two others equivalent either two example leq leq true yet neither leq true leq true nevertheless checking whether one answer subsumes rather convenient strategy since require knowledge particularities used constraint solver makes good choice default strategy chr answer subsumption better strategies may supplied particular constraint solvers option answer combination predname specifies name predicate returns disjunction two given answer stores fails find one example consider simple solver featuring constraints form constraint variable integers rules solver subsumption approach merges two constraints iff however fails work nevertheless single constraint form covers optimal answer combinator case one returns union two overlapping intervals also captures subsumption approach intervals overlap single constraint covers without introducing new answers note idea general answer compaction specific implementation constraints particular apply constraint solvers relaxed answer compaction semantics applications soundness condition answer generalization relaxed example regular prolog would two answers replace two one answer guarantees positive programs answers lost may introduce extraneous answers words property preserved property similar technique possible constrained answers approach logically unsound may acceptable applications answer coverage required example use least upper bound lub operator combine answers tabled abstract interpretation setting codish often schrijvers accuracy efficiency space time exploiting abstract interpretation remain feasible many circumstances toman explored toman use clp program analysis compared abstract interpretation proposal constraints serve abstractions concrete values computation tabling necessary reach fixpoint recursive program constructs notes clp approach less flexible actual abstract interpretation lacks flexible 
control believe proposal relaxed answer compaction could function lub widening operator remedy issue making toman program analysis technique practical remains explored future work evaluation shipment problem evaluate usefulness two proposed answer set optimization approaches based shipment problem problem statement packages available shipping using trucks package weight constraints time delivered truck maximum load destination determine whether subset packages fully load truck destined certain place packages subset delivered time cui problem solved truckload program truckload program constraints leq leq leq leq leq leq leq leq leq leq true number number number number true number number true true leq truckload truckload truckload truckload pack truckload include pack include pack tchr framework tabled clp pack chicago leq leq pack chicago leq leq pack chicago leq leq pack chicago leq leq pack chicago leq leq pack chicago leq leq packages represented constraint database clauses pack chicago leq leq means third package weights pounds destined chicago delivered day predicate computes answer problem truckload chicago computes whether subset packages numbered exists fill truck maximum load pounds destined chicago time constraints captured bound constraint variable may multiple answers query multiple subsets exist satisfy run program four different modes tabling program run without tabling tabling plain avoid recomputation subproblems recursive calls predicate tabled truckload chr encoding goal tabling sorted answer store canonicalized simple sorting permutations detected identical answers truckload chr encoding goal sort tabling combinator apply custom answer combinator proposed example two answers overlapping time intervals merged one answer union time intervals variant declared truckload chr encoding goal interval custom answer combinator table contains runtime results running program four different modes different maximum loads runtime milliseconds obtained intel pentium ghz ram xsb running linux modes tabling space usage kilobytes schrijvers tabling load min plain tabling sorted combinator table runtime results truckload program load plain tabling sorted combinator table space usage truckload program tables number unique answers recorded well table table respectively clear results tabling overhead small loads scales much better modes canonical form answer combination slight space advantage plain tabling increases total number answers hardly runtime effect canonical form whereas answer combination mode faster increasing load summary canonicalization answer store answer combination favorable impact runtime table space depending particular problem related future work theoretical background paper slgd resolution realized toman toman toman establishes soundness completeness termination properties particular classes constraint domains implemented prototype implementation slgd resolution evaluation practical implementation prolog system done load plain tabling sorted combinator table number tabled answers truckload program tchr framework tabled clp various hoc approaches using constraints xsb used past ramakrishnan mukund interfacing solver written explicit constraint store management prolog pemmasani however approaches quite cumbersome lack ease use generality chr closely related implementation work paper builds cui warren presents framework constraint solvers written attributed variables attributed variables much cruder tool writing constraint solvers though implementation issues constraint store 
encoding scheduling strategies hidden chr become user responsibility programs attributed variables also tabled setting user think integration issues attributed variables solver chr provided generic solutions work chr constraint solvers powerful features accessed parametrized options guo gupta propose technique dynamic programming tabling guo gupta somewhat similar one proposed entailment checking particular argument new answer compared value previous answer either one kept depending optimization criterion technique specified particular numeric arguments whereas constraint stores general investigation technique certainly necessary establish extent applicability part work previously published international conference logic programming schrijvers warren colloquium implementation constraint logic programming systems schrijvers schrijvers briefly discuss two applications chr tabling field model checking integration chr xsb shown make implementation model checking applications constraints significantly easier next step search applications explore expressive models checked currently viable traditional approaches applications also serve improve currently limited performance assessment chr tabling shipment problem given indication improved performance behavior practice theoretical reasoning indicates possibility well global chr store proven one main complications tabling chr constraints particular chr programs possible replace global data structure localized distributed ones assessment ramakrishnan approach shown promising partial abstraction subsumption closely related former transforms call general call latter looks answers general calls none available still executes actual call still look implement partial abstraction implications variant subsumption based tabling rao finally better automatic techniques entailment testing schrijvers projection investigated context slgd schrijvers conclusion presented framework tabled clp based integration chr tabled system problems solved time hoc constraint solver integrations solved chr constraint solvers solutions formulated call abstraction tabling constraint stores answer projection answer combination optimization answer set optimization hence integrating particular chr constraint solver requires much less knowledge implementation intricacies decisions made higher level performance turns bottleneck integration stable implementation may specialized language using tchr implementation specification novel contribution generalized answer set compaction may certainly contribute towards end finally would like mention xsb release number presented chr system integrated tabling publicly available since december see http acknowledgements grateful beata giridhar pemmasani ramakrishnan interesting discussions help applications tabled execution constraints field model checking thank anonymous reviewers helpful comments references clark negation failure logic databases gallaire minker eds plenum press new york codish demoen sagonas program analysis languages using xsb international journal software tools technology transfer cui system tabled constraint logic programming thesis state university new york stony brook cui warren system tabled constraint logic programming proceedings international conference computational logic lloyd dahl furbach kerber lau palamidessi pereira sagiv stuckey eds lecture notes computer science vol springer verlag london cui warren attributed variables xsb electronic notes theoretical computer science dutra eds vol elsevier demoen hprolog http 
ramakrishnan smolka tabled resolution constraints recipe model checking systems ieee real time systems symposium orlando florida tchr framework tabled clp duck banda stuckey compiling ask constraints iclp proceedings international conference logic programming lecture notes computer science vol springer verlag france duck stuckey banda holzbaur refined operational semantics constraint handling rules iclp proceedings international conference logic programming lecture notes computer science vol springer verlag france theory practice constraint handling rules journal logic programming october abdennadher essentials constraint programming cognitive technologies springer verlag guo gupta simplifying dynamic programming via tabling proc sixth international symposium practical aspects declarative languages hentenryck lecture notes computer science vol springer verlag holzbaur metastructures attributed variables context extensible unification tech austrian research institute artificial intelligence vienna austria holzbaur prolog constraint handling rules compiler runtime system special issue journal applied artificial intelligence constraint handling rules april jaffar lassez constraint logic programming popl proceedings acm symposium principles programming languages acm press new york usa jaffar maher constraint logic programming survey journal logic programming kanellakis kuper revesz constraint query languages selected papers annual acm symposium principles database systems academic press orlando usa marriott stuckey programming constraints introduction mit press mukund ramakrishnan ramakrishnan verma symbolic bisimulation using tabled constraint logic programming international workshop tabulation parsing deduction vigo spain pemmasani ramakrishnan ramakrishnan efficient model checking real time systems using tabled logic programming constraints international conference logic programming lecture notes computer science springer copenhagen denmark rao ramakrishnan ramakrishnan thread time saves tabling time joint international conference symposium logic programming saraswat rinard concurrent constraint programming popl proceedings acm symposium principles programming languages acm press new york usa ramakrishnan model checking systems international conference formal engineering methods icfem dong woodcock eds lecture notes computer science vol ramakrishnan compiling constraint handling schrijvers rules efficient tabled evaluation padl ninth international symposium practical aspects declarative languages hanus lecture notes computer science springer verlag schrijvers analyses optimizations extensions constraint handling rules thesis department computer science leuven belgium schrijvers demoen chr system implementation application first workshop constraint handling rules selected contributions meister eds ulm germany schrijvers demoen duck stuckey automatic implication checking chr constraints electronic notes theoretical computer science vol schrijvers optimal constraint handling rules theory practice logic programming schrijvers warren constraint handling rules tabled execution iclp proceedings international conference logic programming demoen lifschitz eds lecture notes computer science vol springer verlag france schrijvers warren demoen chr xsb ciclops proceedings colloquium implementation constraint logic programming systems lopes ferreira eds university porto mumbai india toman computing semantics constraint extensions datalog proceedings workshop constraint databases number lecture notes 
computer science cambridge usa toman constraint databases program analysis using abstract interpretation constraint databases applications second international workshop constraint database systems cdb gaede brodsky srivastava vianu wallace eds lecture notes computer science vol springer verlag toman memoing evaluation constraint extensions datalog constraints international journal special issue constraints databases december warren xsb programmer manual version vols http wielemaker release http wolper expressing interesting properties programs propositional temporal logic popl proceedings acm symposium principles programming languages acm press new york usa
6
jan coevolutionary intransitivity games landscape analysis hendrik richter htwk leipzig university applied sciences faculty electrical engineering information technology postfach leipzig germany email richter january abstract intransitivity supposed main reason deficits coevolutionary progress inheritable superiority besides coevolutionary dynamics characterized interactions yielding subjective fitness aiming solutions superior respect objective measurement approximation objective fitness may instance generalization performance paper link measures intransitivity fitness landscapes address dichotomy subjective objective fitness explored approach illustrated numerical experiments involving simple random game continuously tunable degree randomness introduction despite earlier promises optimism using coevolutionary algorithms ceas evolving candidate solutions towards optimum remains complicated almost arcane matter generally unclear prospects success prominently caused defining feature coevolution ceas driven fitness originates interaction candidate solutions candidate solutions words fitness obtained interactions subjective depends candidate solutions actually interacting coevolutionary interactions take place interactions understood constitute tests leads labeling kinds coevolutionary problems problems problems particularly occur game playing contexts instance situations players strategies subject competitive coevolutionary optimum finding argued games players strategies player space understood phenotypic according framework fitness landscapes strategy space genotypic view adopted following discussion notwithstanding coevolutionary dynamics induced subjective fitness aim using cea identifying candidate solutions superior general sense hence coevolution next fitness resulting limited number tests second notion fitness helpful fitness generalizing subjective fitness occurs problems different forms games players strategies usually absolute quality measurement absolute quality measurement would require evaluate possible test cases computationally infeasible circumvent problem enable experimental studies relationships subjective fitness absolute quality measurements number games proposed instance minimal substrates artificial problem settings postulate absolute quality called objective fitness line reasoning coevolutionary dynamics understood aiming progress objective fitness proxy subjective fitness consequently main difficulty designing ceas stems question well subjective fitness represents objective fitness analogy postulated objective fitness number games games players strategies general quality measurements subjective fitness interpretable objective fitness implies game playing may approximation objective fitness different approximations possible instance different instances generalization performance put another way interpretation suggests game playing different degrees objective fitness application examples ceas frequently reported experiments showing mediocre performance mostly attributed coevolutionary intransitivity generally speaking intransitivity occurs superiority relations cyclic cyclic superiority relations consequences coevolutionary dynamics intransitivity may occurs across subsequent generations case may solutions generation better applies respect however imply solutions strictly better cyclic superiority relations occur across generations connoted coevolutionary dynamic intransitivity paper problem coevolutionary intransitivity linked dichotomy subjective objective fitness 
done combining measuring approach intransitivity proposed samothrakis framework codynamic fitness landscapes recently suggested codynamic fitness landscapes enable analyse relationship objective subjective fitness possible solutions coevolutionary search process landscape approach proposed particularly explores coevolutionary intransitivity related issue remainder paper structured follows next section concept codynamic landscapes composed objective subjective fitness briefly recalled sec intransitivity discussed discussion simple random game introduced degree randomness continuously tuned shown intransitivity characterized different types intransitivity measures numerical experiments simple random game presented sec sec concludes paper summary coevolution codynamic landscapes number games section focuses attention approach recently suggested useful understanding coevolutionary dynamics codynamic fitness landscapes landscapes allow studying relationship objective subjective fitness turn mainly determines coevolutionary dynamics define objective fitness triple search space search space points neighborhood structure fitness function fobj objective fitness landscape considered describe optimization problem solved cea problem solving based coevolutionary interactions potential solutions yields subjective fitness hence subjective fitness viewed way cea perceives problem posed objective fitness appears sensible assume subjective landscape possesses search space neighborhood structure fitness function fsub less strongly deviates objective fitness fobj seen subjective fitness usually overestimating underestimating objective fitness moreover coevolutionary run deviation objective subjective fitness dynamic word coevolutionary dynamics dynamically deforms subjective fitness landscape following link subjective objective fitness exemplified number game also called coevolutionary minimal substrate number game population players considered inhabits search spaces search space instance game players may possible values objective fitness function defined search space fobj consequently casts objective fitness landscape subjective fitness result interactive number game therefore calculation subjective fitness fsub player sample evaluators randomly selected sample statistically independent sample next calculation denote size sample evaluators number game defines fitness fsub respect sample calculated counting averaged number members smaller objective fitness fobj objective fitness fobj eval fsub eval fobj fobj otherwise note number game considered postulates objective fitness defines subjective fitness obtained coevolutionary interaction next section perspective subjective objective fitness applied games players strategies population players engaged evaluate subjective fitness interaction players static coevolutionary intransitivity games relation called intransitive set three elements relation always imply instance intransitivity superiority relations cyclic obvious purest form appears game playing cyclic superiority relations mean three players three strategies player using wins wins loses simple example game paper wins rock scissor wins paper scissor loses rock thus paper rock scissor possible strategies player adopt game note kind intransitivity feature preference single round game hence intransitivity static actually induced immediate link evolutionary dynamics consequently next question superiority relations resemble situations coevolutionary intransitivity understood dichotomy subjective objective 
fitness obtain evolutionary dynamics players need adjust strategies game needs played one round words studying iterated games also interesting serves juxtapose static intransitivity coevolutionary dynamic intransitivity one way build relationship game results fitness apply rating system examples rating systems elo system evaluate chess players model paired comparison recently shown methodology also useful analyzing coevolutionary intransitivity rating system creates probabilistic model based past game results seen predictor future results significantly rating system also imposes temporal ranking players following ideas applied simple random game degree randomness tuned game consists players using strategy perform players called round robin tournament game outcome interpreted payoff subject players ratings random given game players games single round robin define percentage prand games random result obtain prand games whose outcome chance predefined distribution remaining games end deterministically according rating difference players thus game falls category perfect incomplete information viewed series round robin tournaments interaction determinism random chance creates temporal rating triangles player scores high results high rating certain time may also lose nominal weaker rating player time may may show characteristics towards third player behavior complies coevolutionary dynamic actually intransitivity addition game also reproduce intransitivities players maximal number static intransitivities intra max see three players form triangle cyclic superiority relations win one game two thus large average number actual static intransitivities table results simple random game three instance game rank determined enumeration gives static intransitivity measure called intransitivity index itx alternative samothrakis suggested use difference measure based divergence kld prediction made rating system actual outcome measure static intransitivity quantities itx kld subjects numerical experiments reported next section game coevolutionary setting involves finding strategy player adopt score best according given understanding performance clearly implies performance measurement generalize single round robin tournament thus several instances round robin tournaments overall results also accounted generalization performance instance round robin scaled generation coevolutionary generalization performance defined mean score solution possible test cases considering possible test cases may computationally infeasible chong used statistical approach involving confidence bounds estimate amount needed test cases given error margin given understanding assuming strategies equally likely selected test strategies generalization performance strategy gpi sci sci score strategy yields instance round robin tournament needed number instances depends bounds given chong generalization performance also builds relationship actual game results fitness therefore seen alternative rating system example simple random game assume five players denoted act upon unknown internally adjusting strategy assume evaluation players considered equal rating say players engaged round robin tournament win scores loss counts results achieved depend rating random assume results scored see tab column player wins games loses games players build triangle cyclic superiority relations results somehow violate expectations established initial rating could met players winning games furthermore results show clearly game completely deterministic respect 
evaluation words rating ranking approach subsumes game history predictor future game results quality prediction depends percentage random game results also results round robin evident whether player successful strategy objectively good strategies players objectively poor round robin gives comparison rating indicating players rank hence ranking difference results showing otherwise hence round robin tournament updates rating producing rating according elo system adopted done via first calculating expected outcome exi rtj quantity exi summarizes winning probabilities player respect players round single game quantity exi expected winning probability player winning player example players rating expected outcome also namely exi rating players updated according difference expectation actual score rti rti sci exi called tunes sensitivity rating results single round robin tournament using new rating gives differences players see tab column best player ranked highest poorest player ranked lowest also note players engaged intransitivity triangle still share rating assume next round game strategies players adjusted supposedly competitive coevolutionary search process see results tab column outcome generally confirms impression first round player still strong violating expectations good scores players poor score player note round players well players form triangle intransitivity contrary first round players neither score rating rating calculated according given tab column two instances round robin tournament first estimation generalized performance obtained averaging scores according results given tab column conforming account quality evaluation player best ranked first followed players ranking respect generalization performance given tab column note ranking gives average ranks tied ranks player leads rank preserves sum ranks note ranking almost equal ranking according ratings exception players similar equal rating simple random game interpreted according landscape view subjective objective fitness recall subjective fitness associated fitness gained individuals interaction others according view round robin tournament yields subjective fitness fsub objective fitness turn generalizes subjective fitness terms absolute quality measurement possible candidates rating fobj generalization performance fobj defining subjective objective fitness way also gives raise reformulating coevolutionary intransitivity generally speaking coevolutionary intransitivity involves cycling objective solution quality cycling may caused subjective fitness adequately representing objective fitness hence subjective fitness may drive evolution search space regions visited evaluated differently generally directions favorable hence coevolutionary dynamic intransitivity understood temporal mismatches order subjective objective fitness consider example game whose results given tab suppose another instance round robin played omitting specific results scores tab column rating considered objective subjective player rating score temporal mismatch objective subjective fitness suppose moment rating declared objective fitness indeed quantity achieve coevolutionary search cea use score guiding search player would likely misguided hand player rating score show match rat rat fig scores ratings generalization performance two percentages random games prand prand subjective objective fitness conditions reformulated employing ranking function tied ranks gives measure coevolutionary intransitivity hence temporal mismatch ptm defined average number rank 
Following this reformulation, the temporal mismatch ptm is defined as the average number of rank mismatches

rank(f_obj(i) | f_obj(1), ..., f_obj(P)) != rank(f_sub(i) | f_sub(1), ..., f_sub(P))

per player i, over a given number of instances of the round robin.

An alternative measure of coevolutionary intransitivity, related to ptm, stems from the fact that coevolutionary selection is based on comparing subjective fitness values. Hence the fitness ranking within one instance of the game (one generation) gives an indication of which direction is preferred, and the difficulties of the search process are caused by how well the subjective fitness represents the objective fitness. The difference between the ranking according to subjective fitness and the ranking according to objective fitness within an instance is therefore also a suitable measure of coevolutionary intransitivity. The quantity summing |rank(f_sub(i)) - rank(f_obj(i))| over all players of an instance is this other measure of coevolutionary intransitivity, called the collective ranking difference (crd).

The observations discussed so far suggest the following relationships: static intransitivity has an ambiguous effect on coevolutionary progress, while the coevolutionary dynamic of intransitivity is expressed by ranking differences between objective and subjective fitness. In other words, the quantities ptm and crd may be useful measures of coevolutionary intransitivity. These relationships are studied by numerical experiments, the topic of the next section.

[Fig.: the intransitivity measure itx versus score, rating and generalization performance; scatter plots of the relationships between the intransitivity measures itx, kld and ptm, for several levels of p_rand.]
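Both ranking-based measures can be computed directly from the two fitness assignments; the following sketch uses average ranks for ties, as in the text, while the exact normalizations (per player and per instance) are assumptions:

```python
from scipy.stats import rankdata  # assigns average ranks to tied values

def crd(f_sub, f_obj):
    """Collective ranking difference of one round-robin instance: the summed
    absolute difference between each player's subjective-fitness rank and
    objective-fitness rank, normalized by the number of players (assumed)."""
    r_sub, r_obj = rankdata(f_sub), rankdata(f_obj)
    return sum(abs(a - b) for a, b in zip(r_sub, r_obj)) / len(f_sub)

def ptm(f_sub_history, f_obj_history):
    """Temporal mismatch: the average number of players per instance whose
    rank according to f_sub differs from their rank according to f_obj."""
    total = 0
    for f_sub, f_obj in zip(f_sub_history, f_obj_history):
        r_sub, r_obj = rankdata(f_sub), rankdata(f_obj)
        total += sum(1 for a, b in zip(r_sub, r_obj) if a != b)
    return total / len(f_sub_history)

# two instances: a perfect match, then two players swapped in the ranking
print(ptm([[3, 2, 1], [3, 2, 1]], [[3, 2, 1], [2, 3, 1]]))  # 1.0
print(crd([3, 2, 1], [2, 3, 1]))                            # 0.666...
```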
Numerical experiments. The report of experimental results starts with the time evolution of the simple random game introduced in the last section. Fig. shows scores, ratings and generalization performances for two different percentages of random results, p_rand, over the players and instances of the round robin. The experiments were initialized with ratings slightly spread around a value randomly determined so that the chance to win or lose is evenly distributed. The figures show the curves of a single instance whose randomly determined part of the game outcomes is hence not meant to be statistically significant, but illustrates typical behaviour. As can be seen for a low number of random outcomes (Fig.), the scores a player achieves are mainly defined by the initial rating; small differences in the initial rating are amplified and lead to matching rankings of rating and generalization performance. For a large number of random game results (Fig.), the scores are almost purely by chance and evenly distributed; consistently, the ratings and generalization performances tend to approach the expected value implied by the underlying distribution, and all players are alike.

The next experiments address the relationships between static intransitivity, expressed by the measure itx, and the quantities representing subjective as well as objective fitness. Scatter plots are given in Fig. for the players and five levels of p_rand. The experimental setup includes a repetition of each run several times, with many instances of the round robin of which the first instances are discarded to omit transients. Note that this gives a sufficient number of instances according to the bounds on generalization performance; the results can hence be seen as statistically significant, and in fact the confidence intervals are too small to be depicted in the figures. It can be seen that each level of randomness in the game obtains a distinct level of static intransitivity as measured by itx; a rising p_rand also increases itx. It is interesting that there is almost full variation of score, rating and generalization performance for any given level of intransitivity itx, which indicates that static intransitivity seems to have little influence on either subjective or objective fitness. This is particularly visible for low levels of p_rand, where there is a clear sorting according to p_rand, while the range of fitness a given player achieves is not connected with differences in itx.

The next experiment explores the relations between the index-based measure itx and the probabilistically motivated measure kld (see Fig., as well as the following figure). The results are for four different numbers of players and five levels of random results p_rand; the average itx is denoted <itx>, and all given quantities are normalized according to the number of players. It can be seen that the quantities have an approximately proportional relationship. This generally confirms the result which showed that itx is as brittle as kld for the studied game, and it can be concluded that the quantities are interchangeable. Next, the relationship between static intransitivity and the dynamic intransitivity measure of temporal mismatch ptm is studied (Fig.). For the mismatch ptm based on the rating as objective fitness (Fig.) there is a linear relationship, at least for small values of p_rand; for the mismatch ptm based on the generalization performance (Fig.) no sensible conclusions about relations can be drawn.

[Fig.: the intransitivity measure crd versus the time average <itx> and the maximum max itx; scatter plots of the relationships between the intransitivity measure crd and itx, ptm.]

Finally, we focus on the dynamic intransitivity measure of collective ranking difference, crd (see Fig.). Fig. shows scatter plots of crd based on the score as subjective fitness and the generalization performance as objective fitness, for the players and five levels of p_rand. It can be seen that, although different p_rand give different itx, there is no such difference for crd (Fig.). However, there is an almost linear relation between crd and max itx, at least for lower levels of p_rand, which can be interpreted as crd scaling between the time evolution of subjective and objective fitness and the time evolution of static intransitivity (compare Fig.). To show the characteristics of this scaling for different numbers of players and different levels of randomness p_rand, see Fig. There, the relation between crd based on the rating as objective fitness and crd based on the generalization performance as objective fitness is shown; it can be seen that the quantities scale linearly unless p_rand is large, which allows the conclusion that both quantities account for intransitivity properties. Finally, the relation between crd and ptm is shown in Fig.; it can be seen that ptm scales more weakly than crd, particularly for a small number of players and high randomness. It may be conjectured that crd is a more meaningful coevolutionary intransitivity measure than ptm.

Conclusions. This paper is a contribution to the ongoing discussion of the effect of intransitivities on coevolutionary progress. The approach presented allowed a measuring approach for intransitivity to be linked to the framework of fitness landscapes, enabling an analysis of the relationship between objective and subjective fitness. For experimentally illustrating the approach, a simple random game with a continuously tunable degree of randomness was proposed. Apart from the random part, the game results depend on the ratings of the players, which reflect the past success of each player; the game proposed thus characterizes many games whose outcome is a function of chance as well as of predictions based on the game history. For studying the effect of intransitivity, measures were explored: an extension of existing static intransitivity measures towards dynamic measures accounting for coevolutionary intransitivity was proposed, based on rankings of subjective and objective fitness. It was shown that coevolutionary intransitivity can be understood as a ranking problem and hence be accounted for by ranking statistics. To enlarge the scope of the presented approach, a next step is that the intransitivity measures could be studied for other types of games, for instance social games such as the iterated prisoner's dilemma, or the board game Othello.

References
Antal, Ohtsuki, Wakeley, Taylor, Nowak: Evolutionary game dynamics in phenotype space. Proc. Nat. Acad. Sci.
Bradley, Terry: Rank analysis of incomplete block designs: the method of paired comparisons. Biometrika.
de Jong: Intransitivity in coevolution. In: Yao et al. (eds.), Parallel Problem Solving from Nature VIII. Springer, Berlin Heidelberg New York.
Chong, Tino, Yao: Measuring generalization performance in coevolutionary learning. IEEE Trans. Evolut. Comp.
Chong, Tino, Yao: Improving generalization performance in coevolutionary learning. IEEE Trans. Evolut. Comp.
Frank, Harary: Cluster inference by using transitivity indices in empirical graphs. J. Amer. Statist. Assoc.
de Jong: Objective fitness correlation. In: Lipson (ed.), Proc. Genetic and Evolutionary Computation Conference (GECCO). ACM, New York.
Elo: The Rating of Chess Players, Past and Present. Batsford, London.
Funes, Pujals: Intransitivity revisited: coevolutionary dynamics of numbers games. In: Beyer, O'Reilly (eds.), Proc. Genetic and Evolutionary Computation Conference (GECCO). Morgan Kaufmann, San Francisco.
Kallel, Naudts, Reeves: Properties of fitness functions and search landscapes. In: Kallel, Naudts, Rogers (eds.), Theoretical Aspects of Evolutionary Computing. Springer, Berlin Heidelberg New York.
Langville, Meyer: The Science of Rating and Ranking. Princeton University Press, Princeton.
Luce: Individual Choice Behavior: A Theoretical Analysis. John Wiley, New York.
Miconi: Why coevolution doesn't "work": superiority and progress in coevolution. In: Vanneschi et al. (eds.), EuroGP. Springer, Berlin Heidelberg New York.
Nowak, Tarnita, Antal: Evolutionary dynamics in structured populations. Phil. Trans. R. Soc.
Popovici, Bucci, Wiegand, de Jong: Coevolutionary principles. In: Rozenberg, Kok et al. (eds.), Handbook of Natural Computing. Springer, Berlin Heidelberg New York.
Richter, Engelbrecht (eds.): Recent Advances in the Theory and Application of Fitness Landscapes. Springer, Berlin Heidelberg New York.
Richter: Fitness landscapes that depend on time. In: Richter, Engelbrecht (eds.), Recent Advances in the Theory and Application of Fitness Landscapes. Springer, Berlin Heidelberg New York.
Richter: Codynamic fitness landscapes of coevolutionary minimal substrates. In: Coello Coello et al. (eds.), Proc. IEEE Congress on Evolutionary Computation (IEEE CEC). IEEE Press, Piscataway.
Samothrakis, Lucas, Runarsson, Robles: Coevolving game-playing agents: measuring performance and intransitivities. IEEE Trans. Evolut. Comp.
van Wijngaarden, de Jong: Evaluation and diversity in coevolution. In: Rudolph et al. (eds.), Parallel Problem Solving from Nature. Springer, Berlin Heidelberg New York.
Watson, Pollack: Coevolutionary dynamics in a minimal substrate. In: Spector et al. (eds.), Proc. Genetic and Evolutionary Computation Conference (GECCO). Morgan Kaufmann, San Francisco.
Hyper-dimensional computing for a visual system that is trainable

Guglielmo Montone, Laboratoire Psychologie de la Perception, Université Paris Descartes, Paris, France
J. Kevin O'Regan, Laboratoire Psychologie de la Perception, Université Paris Descartes, Paris, France
Alexander V. Terekhov, Laboratoire Psychologie de la Perception, Université Paris Descartes, Paris, France

Abstract. In this work we propose a system for visual question answering. Our architecture is composed of two parts: the first part creates a logical knowledge base given an image; the second part evaluates questions on the knowledge base. Differently from previous work, the knowledge base is represented using hyper-dimensional (HD) computing. This choice has the advantage that all the operations of the system, namely creating the knowledge base and evaluating questions on it, are differentiable, thereby making the system easily trainable in an end-to-end fashion.

Introduction. Visual question answering (VQA) and visual Turing test are terms that refer to the following task: a machine is provided with a picture and a question about the picture, and the machine is asked to return an answer to the question. Such tasks have become popular in the last year thanks to the emergence of several datasets containing images associated with questions and answers. Classical deep-neural-network approaches face these tasks by training architectures composed of several parts; such architectures often include an RNN, often an LSTM, for encoding the question and producing the answer, and a CNN for analyzing the image. The main idea behind these architectures is to project the question and the image into a relatively low-dimensional space where the two can be compared. These approaches give impressive results when the question concerns simple properties of the image, like the color of a bus. However, the results become worse when the question is more complex and involves verifying one or more relations among the objects in the picture, such as whether the fruit on the plate is different from the one in the basket. Another group of approaches builds architectures with a perceiver and an evaluator: the perceiver receives the image as input and builds a knowledge base relative to the input, while the evaluator executes the question on the knowledge base to produce the answer. A disadvantage of these approaches is that the computations performed by the different parts of the system are of a different nature, leading to cumbersome architectures that are complex to train in an end-to-end fashion. The main contribution of this work is to show, at least in a simple case such as the one we propose, that it is possible to build an architecture in which the evaluator performs computations of the same kind as the perceiver; for this reason the architecture can easily be trained in an end-to-end fashion.

[Figure: example images from the dataset.]

In particular, the architecture exploits the properties of HD computing. It is a well-known fact that, when defining vectors of a high-dimensional space, it is possible to store and retrieve information in such vectors with simple operations. We use this to store the values of logical constants and also the values of relations among such constants, which together constitute the knowledge base of our system. In our architecture a feed-forward (FFW) network is asked to associate an image given as input with a vector describing the image, returned as output; the vector can then be queried to extract the answers to given questions, the questions consisting of sets of differentiable operations on the vector. The parameters of the network are updated with a gradient-descent procedure in order to minimize the number of wrong answers. In the present paper we prove that such an architecture can be successfully trained on a simple VQA task. The paper is organized as follows: in the next section we present the dataset and show how it is possible to encode the knowledge base and retrieve information from it; finally, we describe the training procedure and comment on the results of our tests.

Dataset. The dataset is composed of RGB images. In each image there are two geometrical figures, of four possible colors, and the geometrical figures can appear in four different positions in the image. Namely, the colors and shapes used are the following: the colors red, green, magenta and orange, and the shapes circle, square, triangle and cross. All possible images containing two such figures were created; sample images from the dataset are presented in the figure. A number of the total images were used as the test set, and the rest of the images as the training set.

Representation of the knowledge base. Each picture in the dataset described above can be expressed in terms of the following set of natural-language concepts: position, color and shape, with values red, green, magenta, orange, circle, square, triangle and cross. For example, the first picture in the figure can be described by the following sentence: in position 1 there is a shape of type square whose color is magenta, and in position 2 there is a shape of type square whose color is green.
In the following we show how, using the method known as HD computing, it is possible to store and retrieve the information contained in such a sentence within one vector. We take D-dimensional vectors whose components are randomly chosen in {-1, 1}, and label them using letters in italics, like a and b. Let us define two operations, entangle and grouping. Entangle (here written with *) is defined component-wise as the XOR-like product of the two vectors; notice that a * (a * b) = b, so the operation is its own inverse (we will sometimes refer to this operation also as unbinding). Grouping (written +) is defined as the component-wise sum of the vectors. It is also useful to define a distance between two vectors; we choose as distance the cosine similarity cos(a, b).

We associate with each of the concepts listed in the previous paragraph a random vector, representing each concept by its name written in italics; for example, to the concept color there corresponds the vector color. An image in the dataset is associated with a vector in the following way. To illustrate with an example, consider the first picture of the figure, whose natural-language description is: in position 1 a shape of type square whose color is magenta; in position 2 a shape of type square whose color is green. To this description we associate the vector

h = p1 * (shape * square + color * magenta) + p2 * (shape * square + color * green).

As will be shown, the information stored in the vector h can be retrieved. For example, to retrieve the information about the shape in position 1, the vector h is entangled with the vector p1 and then with the vector shape; the resulting vector is much closer to the vector square than to the three vectors triangle, circle and cross. Following the previous example, we associated a vector with each image of the dataset, building a dataset of image and vector pairs.

Querying the knowledge base. The vectors defined above contain the information about the pictures and can be queried to retrieve the properties of each picture. Querying consists of applying a set of operations in a specific sequence. In the following we present the queries used to train our architecture; each query is presented first in natural language, followed by the corresponding set of operations. Let us first define the set of positions, Positions = {p1, p2, p3, p4}. We then have the following questions:

Question 1: is there a circle in the picture? Evaluate cos(circle, h * pos * shape) over pos in Positions.
Question 2: is the color in a given position green? Evaluate cos(green, h * pos * color).
Question 3: is there a magenta triangle? Evaluate the distances for magenta and triangle jointly over pos in Positions.
Question 4: is there a square in a given position? Evaluate cos(square, h * pos * shape).
Question 5: do the shapes in the two positions have the same shape? Compare the shape vectors retrieved for the two positions.

All the previous questions have a positive or negative answer for all the pictures of the dataset. To each picture we associate five numbers, one per question, each equal to one when the answer to the question on that picture is true and to the opposite value otherwise; we use these values as target values when training the network, adding them to the dataset previously defined.

Training and testing. We trained a FFW network with two hidden layers of rectified linear units. The network is asked to return as output a vector describing the picture; for this reason the output layer of the network has D nodes with hyperbolic-tangent activation functions. The network is trained to associate to an input image a vector that, when queried, returns the correct information about the image. The error function minimized during training is therefore composed of one term per question: in the equation of each question we took the left-hand side and computed its value on an image of the dataset by substituting the term net(I), the output of the network on image I (net being the function implemented by the network), for the vector h. The result of the computation is forced to be closer to the target value when the answer to the question for that picture is true, and away from it otherwise; for question 1, for example, the error term forces cos(circle, net(I) * pos * shape) towards the target value for that picture. In this way we define one error term for each of the questions of the previous paragraph, called the error terms relative to questions 1 to 5, and the error function minimized during training is their sum.

We developed two kinds of test for the network. In the first test the network is asked to answer the questions used during training, but on new data; in the second test the network is asked to answer new questions, not used during training. For the first test the network was evaluated on a dataset of examples not used during training: for each example of the test set we evaluated the answer to the questions by substituting the output of the network, net(I), for the vector h in the equation representing each question, assuming the answer to be true when the corresponding inequality on the distance was respected and false otherwise. The network reached high accuracy on all questions. In the second experiment we tested the network on new questions; in particular, we asked questions similar to question 4 but relative to the three shapes square, triangle and cross. In all cases we obtained high accuracy values; interestingly, the worst performance was obtained for the cross shape, the shape never used in the questions during training.
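The encoding and querying scheme can be sketched as follows; this is a minimal illustration assuming bipolar (plus/minus one) vectors, where binding by elementwise multiplication plays the role of the XOR-like entangle operation, a standard choice in HD computing, and the dimensionality is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # high dimensionality makes random vectors quasi-orthogonal

def rand_vec():
    """Random bipolar vector representing one concept."""
    return rng.choice([-1.0, 1.0], size=D)

def bind(a, b):      # "entangle": self-inverse, bind(bind(a, b), b) == a
    return a * b

def bundle(*vs):     # "grouping": superposition of several vectors
    return np.sum(vs, axis=0)

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

names = ["p1", "p2", "shape", "color", "square", "green", "magenta",
         "circle", "triangle", "cross", "red", "orange"]
V = {n: rand_vec() for n in names}

# knowledge base for: magenta square at p1, green square at p2
h = bundle(bind(V["p1"], bundle(bind(V["shape"], V["square"]),
                                bind(V["color"], V["magenta"]))),
           bind(V["p2"], bundle(bind(V["shape"], V["square"]),
                                bind(V["color"], V["green"]))))

# query: which shape is at position 1?  Unbind p1 and shape, then compare.
probe = bind(bind(h, V["p1"]), V["shape"])
for s in ["square", "triangle", "circle", "cross"]:
    print(s, round(cos(probe, V[s]), 3))   # 'square' scores clearly highest
```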
Conclusions. In this paper we presented an architecture for VQA that uses HD computing to encode a knowledge base and to evaluate queries on it. This choice makes the system easy to train in an end-to-end fashion. The system proved to work well, and to generalize well, on a simple task; more experiments are needed to test the architecture on more challenging, natural benchmarks.

Acknowledgments. This work was funded by the ERC Advanced Grant FEEL and the ERC Proof of Concept Grant FeelSpeech.

References
S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, D. Parikh: VQA: visual question answering. In: Proceedings of the IEEE International Conference on Computer Vision.
H. Gao, J. Mao, J. Zhou, Z. Huang, L. Wang, W. Xu: Are you talking to a machine? Dataset and methods for multilingual image question answering. In: Advances in Neural Information Processing Systems.
A. Joshi, J. Halseth, P. Kanerva: Language recognition using random indexing. arXiv preprint.
P. Kanerva: Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors. Cognitive Computation.
J. Krishnamurthy, T. Kollar: Jointly learning to parse and perceive: connecting natural language to the physical world. Transactions of the Association for Computational Linguistics.
A. Rahimi, P. Kanerva, J. Rabaey: A robust classifier using hyperdimensional computing. In: Proceedings of the International Symposium on Low Power Electronics and Design. ACM.
M. Ren, R. Kiros, R. Zemel: Image question answering: a visual semantic embedding model and a new dataset. In: Proc. Advances in Neural Information Processing Systems.
L. Yu, E. Park, A. C. Berg, T. L. Berg: Visual Madlibs: fill in the blank description generation and question answering. In: Proceedings of the IEEE International Conference on Computer Vision.
Imperial College London
Department of Computing

Reinforcement Learning in a Neurally Controlled Robot Using Dopamine Modulated STDP

Richard Evans

Submitted in partial fulfilment of the requirements for the MSc degree in Advanced Computing of Imperial College London, September.

Contents

1 Introduction
  Motivation; Project aims; Thesis outline
2 Background
  Neurons: Biological neurons; The Hodgkin-Huxley model; The Izhikevich model
  Models of neural networks: Artificial neural networks; Spiking neural networks
  Reinforcement learning: Machine learning; Markov decision processes; Algorithms; Eligibility traces; Continuous parameter spaces
  Reinforcement learning in the brain: BCM theory; Spike-timing dependent plasticity; Dopamine modulated STDP
  Plasticity and stability; Neural encoding; Taxis
  Spiking neural network controlled robots: Robot training using a genetic algorithm; Robot training using STDP; Robot training using reward modulated STDP
3 Methodology
  Environment; Robot (sensors); Neural model (numerical approximation, phasic activity); Sensorimotor encoding (sensor neuron encoding, motor velocity calculation); Network architecture; Plasticity; Moving the dopamine response; Network stability; Taxis; Exploration; Conditioned stimulus
4 Results and discussion
  Orbiting behaviour; Food attraction learning; Plasticity of learning; Dopamine response to a secondary stimulus; Secondary behaviour learning; Dual behaviour learning (each with experimental setup, results and discussion)
5 Discussion
  Reinforcement learning; Biological plausibility
6 Conclusions
  Conclusions; Future work

Abstract

Recent work has shown that dopamine-modulated STDP can solve many of the issues associated with reinforcement learning, such as the distal reward problem. Spiking neural networks also provide a useful technique for implementing reinforcement learning in an embodied context, as they can deal with continuous parameter spaces and are better at generalizing the correct behaviour to perform in a given context. In this project we implement a version of dopamine-modulated STDP in an embodied robot on a food-foraging task with simulated dopaminergic neurons. We show that the robot is able to learn a sequence of behaviours in order to achieve a food reward. In tests the robot was able to learn a behaviour and subsequently unlearn that behaviour when the environment changed, over a number of trials. Moreover, we show that the robot is able to operate in an environment in which the optimal behaviour changes rapidly, so that the agent must constantly relearn. In a more complex environment the robot was able to learn attraction over trials despite the large temporal distance between the correct behaviour and the reward; this was achieved by shifting the dopamine response from the primary stimulus (food) to a secondary stimulus. This work provides insights into the reasons behind observed biological phenomena, such as the bursting behaviour observed in dopaminergic neurons, as well as demonstrating that spiking neural network controlled robots are able to solve a range of reinforcement learning tasks.

Acknowledgements

I would like to thank my supervisor, Murray Shanahan, whose support and guidance helped greatly in allowing me to conduct this research in neural robotics.

Chapter 1: Introduction

Motivation. The ability of a robot to operate autonomously is a highly desirable characteristic in applications in a wide range of areas, such as space exploration, self-driving cars, cleaning robots and assistive robotics. Looking at the animal kingdom, a key property that a large variety of animals possess is being able to learn which behaviours to perform in order to receive a reward or avoid unpleasant situations, for example when foraging for food or avoiding predators. This would also be a useful property for robots: if we want a robot to achieve a goal, the exact sequence of behaviours needed to achieve it may be highly complex and may change over time. This learning paradigm, known as reinforcement learning, has been researched in the context of machine learning for many years; despite this, even simple animals are able to outperform robots on the vast majority of real-world reinforcement learning problems. In recent times, the underlying neural processes by which the brain solves
reinforcement learning tasks have started to be revealed. Changes in the connection strength between neurons have long been thought to be the key process by which animals are able to learn, and the key process believed to control how the connection weights are modified is spike-timing dependent plasticity (STDP). Using spiking neural networks that implement such models of learning and plasticity to control robots has two main advantages. Firstly, it is hoped that neurally inspired control algorithms will, at least in some domains, be able to outperform their classical machine-learning counterparts, due to the vast array of problems real brains deal with and their implicit ability to deal with continuous environmental domains. Secondly, by implementing current models of neurological learning in an embodied context, we gain greater insight into the range of behaviour that can be explained by these models, and also into where the models fail.

Project aims. This project attempts to solve a reinforcement learning task of food foraging and poison avoidance, using a robot controlled by a spiking neural network that incorporates several aspects of the brain as observed in vivo. We aim to show that the robot can learn attraction and avoidance behaviours in a dynamic environment. By designing an environment that requires a sequence of behaviours before food can be reached, we aim to show that the robot can learn by propagating the reward signal from the dopamine stimulus; in this way we aim to show that the robot is able to learn sequences of behaviour larger than previously demonstrated. This work builds on the models of Izhikevich and others, providing neurological models that can be used to control a robot; the use of these models to control a robot on this task is based on the work of Chorley and Seth. We aim to show that with several key extensions, such as incorporating the dopamine signal into the network directly as well as using a generic network architecture, the robot is able to deal with a much wider variety of problems and to learn longer sequences of behaviour than previous implementations were able to learn. These are vital properties for an agent interacting with the real world, and a key step towards fully autonomous agents that can deal with a wide variety of problems and scenarios. The brain provides mechanisms of plasticity and stability that play a key role in many of its properties, such as learning, memory and attention; by implementing a neurally inspired system that incorporates many current neural models, we aim to show how well these models are able to explain the learning behaviours seen in the animal world.

Thesis outline. The remainder of this report is structured in five chapters. In chapter 2 we review current research and the relevant literature in the context of reinforcement learning and neurally controlled robots. Chapter 3 presents the techniques, models and methodology used in the project: the architecture of the robot, the environment, and the spiking neural network control architecture are discussed. The results of evaluating the robot in a wide range of scenarios are discussed in chapter 4, along with a discussion of the properties of the robot that allow it to achieve these results. In chapter 5 we relate the results to the wider context of reinforcement learning and neurological models and discuss their implications. Finally, in chapter 6 we present possible future work based on the results of this paper.

Chapter 2: Background

Neurons. The underlying biological structure of neurons, and the models used to simulate them, are discussed in this section.

Biological neurons. The brain consists of a vast connected network of neurons. Neurons can be subdivided into different types; however, the general architecture of almost all neurons is the same. The figure shows the basic structure of a typical neuron: the dendrites, the cell body (soma) with its nucleus, the axon hillock, the axon with its myelin sheath, the axon terminal and the synapse. Within a neuron, communication takes place via its electrical properties.

[Figure: the structure of a neuron cell.]

The figure shows the membrane potential of a neuron plotted over time whilst receiving a constant input current. In its resting state the membrane potential of the interior of the neuron sits at approximately -70 mV. The neuron receives inputs from several other neurons via its dendrites; when one of these presynaptic neurons fires, the effect is to raise or lower the membrane potential of the neuron. When the membrane potential at the axon hillock (see the figure) reaches a critical threshold, voltage-gated ion channels open along the axon, which causes the neuron to rapidly depolarize; this is referred to as the neuron spiking, or firing. The wave of depolarization reaches the axon terminal, where it is converted into a chemical signal that is passed across
the synapse to the next neuron. When the depolarization reaches a second critical threshold, another set of voltage-gated ion channels is opened, which causes the neuron to rapidly repolarize, returning it towards its resting potential. The repolarization overshoots the resting potential, causing hyperpolarization, after which the firing neuron enters a refractory period during which it cannot fire. There are two main types of neuron used in this paper: excitatory and inhibitory neurons. Excitatory neurons have the effect of raising the membrane potential of the neurons they project to, increasing their likelihood of firing, whereas inhibitory neurons have the effect of decreasing the membrane potential, decreasing the likelihood of firing.

[Figure: the membrane potential of a neuron receiving a constant input current, plotted over time, showing the firing threshold, the resting potential and the hyperpolarization.]

Each synapse between neurons has an associated weight, or strength: the degree to which the membrane potential is raised or lowered following a spike. The mechanisms by which neurons communicate have several useful properties. Firstly, communication within the cell is fast, via the use of voltage-gated ion channels. Secondly, by communicating in an all-or-nothing fashion from cell to cell, a network of neurons is able to represent highly complex functions; research has shown that spiking neural networks can represent any function to within a specified degree of accuracy.

The Hodgkin-Huxley model. Hodgkin and Huxley developed an accurate model of the membrane potential of a neuron based on modelling the ion channels within the neuron. Their equation is

C dv/dt = -(sum of ionic currents) + I(t),

where C is the capacitance of the neuron, v is its membrane potential, the ionic currents are the various currents that pass through the cell membrane, and I(t) is the external current coming from other neurons at time t. The ionic currents are themselves modelled using differential equations whose details are not included here; the full definition can be found in Hodgkin and Huxley's paper. The equation is biologically accurate but incurs a heavy computational cost, making it infeasible in most practical situations.

The Izhikevich model. Several attempts have been made to formulate a model that provides a good compromise between biological accuracy and computational feasibility. One of the best models in terms of computational efficiency and biological accuracy was formulated by Izhikevich. In the Izhikevich model the membrane potential v is modelled together with a membrane recovery variable u, which determines the refractory period of the neuron:

v' = 0.04v^2 + 5v + 140 - u + I
u' = a(bv - u),

with a, b, c and d the parameters of the model. When a spike occurs, i.e. when v reaches its peak of 30 mV, the membrane potential is reset and the recovery variable is incremented:

if v >= 30 mV, then v <- c and u <- u + d.

By varying these four variables it is possible to create neurons with a wide variety of behaviours; the figure summarizes the neuronal types that can be modelled by varying the parameters, including regular spiking, intrinsically bursting, chattering, fast spiking, low-threshold spiking (LTS) and resonator neurons.

[Figure: an overview of the types of neuron that can be modelled by varying the parameters a to d of the Izhikevich model; an electronic version of the figure and reproduction permissions are freely available.]
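A minimal Euler-method simulation of a single Izhikevich neuron follows; the regular-spiking parameter set (a = 0.02, b = 0.2, c = -65, d = 8) is the standard one from Izhikevich's paper, while the input current, step size and run length are arbitrary choices for illustration:

```python
import numpy as np

def izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, T=1000, dt=1.0):
    """Euler integration of the Izhikevich model:
        v' = 0.04 v^2 + 5 v + 140 - u + I
        u' = a (b v - u)
    with reset v <- c, u <- u + d whenever v reaches 30 mV."""
    v, u = c, b * c
    spikes, vs = [], []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike: record it and apply the reset
            spikes.append(step * dt)
            v, u = c, u + d
        vs.append(min(v, 30.0))       # clip the spike peak for plotting
    return np.array(vs), spikes

vs, spikes = izhikevich()
print(f"{len(spikes)} spikes in 1 s of simulated time (regular spiking)")
```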
Models of neural networks. In this section the different approaches used to model networks of connected neurons are discussed.

Artificial neural networks. The first computational model of a neural network to be developed was the artificial neural network (ANN). An ANN consists of a set of processing units (neurons); each neuron computes a weighted sum over its inbound connections, which may come from other neurons or from external inputs. This value is passed through an activation function, and the result is passed to the next set of neurons or forms the output of the system. A common activation function is the sigmoid function, which provides a differentiable approximation of the all-or-nothing processing used by real neurons. The figure shows the model of a single neuron in an ANN.

[Figure: a perceptron computes a weighted sum of its inputs and weights and passes the value through a transfer (activation) function to produce a binary output.]

Initially ANNs consisted of a single layer of neurons; such ANNs, known as perceptrons, are limited to representing linear functions of their inputs. For example, in a classification task the data must be linearly separable for a single-layer network to be able to classify the data correctly. To combat these problems the multilayer perceptron was introduced, in which the outputs of the previous layers serve as inputs to the next layer. The development of an efficient training algorithm for the multilayer perceptron, which updates the weights of the network to minimize the squared error over a set of training examples, allowed ANNs to be used in a wide variety of fields. The biggest limiting factor of artificial neural networks, especially as a model of the brain, is that they cannot process information encoded in the time domain.

Spiking neural networks. A more realistic model of biological neural networks was then developed: spiking neural networks (SNNs) have the advantage of being able to process temporally distributed data, as well as having a large memory capacity. Unlike ANNs, which are commonly directed acyclic graphs, SNNs are amenable to networks with cycles; such cycles allow spiking neural networks to form a working memory. The individual neurons of a spiking neural network can be modelled in a variety of ways, several of which were outlined in the previous section; the methods by which SNNs can be trained are outlined later in this chapter.

Reinforcement learning. Machine learning. Reinforcement learning is the process by which an agent can automatically learn the correct behaviour in order to maximize its performance. Reinforcement learning is an important learning paradigm when a robot interacts with a real-world scenario: the robot is often not explicitly told the correct action for a given environment state, but must infer it from the rewards and punishments received after an action is performed. In the standard reinforcement learning setting the agent is given a reward r when it performs an action in a specific state, and seeks to maximize its reward over time. Sutton and Barto give a good history of reinforcement learning algorithms; the key approaches relevant to this paper are outlined below.

Markov decision processes. The majority of work in reinforcement learning models the problem as a Markov decision process. The agent can visit a finite number of states; in each state the agent can perform a finite number of actions that transform the environment into a new state, and the state reached by performing an action depends only on the previous state and the action. This interaction is shown in the figure. The agent's policy is defined as the probability of choosing action a in state s. An important function in reinforcement learning is the action-value function Q(s, a), which gives the expected future reward for taking action a in state s,

Q(s, a) = E[ sum over k of gamma^k r_(t+k+1) | s_t = s, a_t = a ],

where the discount factor gamma determines the importance placed on rewards closer in time relative to more distant ones.

[Figure: the agent-environment interaction. Given the state and reward at time t, the agent decides which action to perform; the action updates the state and results in a reward (after Sutton and Barto).]

If we know the optimal Q, the optimal policy becomes trivial: pick the action with the highest expected reward. Most algorithms are therefore concerned with finding this value function. The underlying feature of these algorithms is the concept of temporal-difference learning: the difference between the predicted reward and the received reward is used to update the agent's policy. The algorithms can be divided as follows. Some learning algorithms assume that a series of states and actions is generated (see the figure) that eventually reaches a terminating state, and learning is performed only once the terminating state has been reached. Online algorithms, however, aim to continuously update the agent's estimate of Q as the agent explores the environment; they provide the advantage of not having to wait until the end of an episode to incorporate new information. In the case of this paper, an agent interacting with the real world does not always have a clearly defined termination condition that causes an episode to end. Within the set of online algorithms there is a further distinction between on-policy and off-policy algorithms. On-policy algorithms attempt to learn the optimal policy whilst simultaneously following their current estimate of the optimal policy; note that the agent's estimate of the optimal policy is likely to be different from the actual optimal policy, so this method can get stuck in local minima and fail to converge to the optimum policy. Off-policy algorithms, on the other hand, attempt to learn the optimal policy whilst not necessarily following it, choosing actions with a probability related to their expected reward. This has the advantage that the agent explores the state space; however, it comes at the cost of a possibly reduced reward compared to greedy methods, due to the exploration of suboptimal states. In this paper we demonstrate a mechanism by which such learning can be achieved using spiking neural networks.

The best-known on-policy algorithm is SARSA, an outline of which is given in Algorithm 1. The key step is that the observed reward, together with the predicted value at the next time step, is used to estimate the correct Q value; this is compared with the predicted Q value to give a prediction error, which is used to update the estimate.

Algorithm 1: SARSA
  Initialize Q(s, a) arbitrarily
  Repeat for each episode:
    Initialize s; choose a from s using the policy derived from Q
    Repeat for each step of the episode:
      Take action a, observe r and s'
      Choose a' from s' using the policy derived from Q
      Q(s, a) <- Q(s, a) + alpha [r + gamma Q(s', a') - Q(s, a)]
      s <- s'; a <- a'
    until s is terminal
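The SARSA loop can be sketched in a few lines of Python; the tiny corridor environment and its minimal interface (reset, step, a list of actions) are hypothetical, included only to make the sketch runnable:

```python
import random
from collections import defaultdict

class Corridor:
    """Toy episodic environment: move left/right along a line of cells;
    reaching the right end gives reward 1 and ends the episode."""
    actions = [-1, +1]
    def __init__(self, n=6):
        self.n = n
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, a):
        self.pos = max(0, self.pos + a)
        done = self.pos == self.n - 1
        return self.pos, (1.0 if done else 0.0), done

def sarsa(env, episodes, alpha=0.1, gamma=0.9, eps=0.1):
    """On-policy SARSA: the TD error compares the predicted value Q(s, a)
    with the observed reward plus the value of the action actually taken."""
    Q = defaultdict(float)
    def policy(s):
        if random.random() < eps:
            return random.choice(env.actions)               # explore
        return max(env.actions, key=lambda a: Q[(s, a)])    # exploit
    for _ in range(episodes):
        s = env.reset()
        a = policy(s)
        done = False
        while not done:
            s2, r, done = env.step(a)
            a2 = policy(s2)
            delta = r + gamma * Q[(s2, a2)] - Q[(s, a)]     # prediction error
            Q[(s, a)] += alpha * delta
            s, a = s2, a2
    return Q

Q = sarsa(Corridor(), episodes=500)
print(max(Corridor.actions, key=lambda a: Q[(0, a)]))  # learned: move right
```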
The best-known off-policy algorithm is Q-learning, whose pseudocode is given in Algorithm 2. The key difference is that the algorithm takes the maximum over the next state's actions.

Algorithm 2: Q-learning
  Initialize Q(s, a) arbitrarily
  Repeat for each episode:
    Initialize s
    Repeat for each step of the episode:
      Choose a from s using the policy derived from Q (with exploration)
      Take action a, observe r and s'
      Q(s, a) <- Q(s, a) + alpha [r + gamma max over a' of Q(s', a') - Q(s, a)]
      s <- s'
    until s is terminal

Eligibility traces. The SARSA algorithm described above uses a one-state lookahead: it only looks at the reward at the next time step when updating Q. Because each state often has to be revisited many times for the algorithm to converge, problems can be encountered if we relax the Markov property, for example when the reward for an action is received several states later; in this case it has been shown that such algorithms may fail to converge correctly. We would like to be able to incorporate rewards at future states into the update function. This can be accomplished by introducing the concept of an eligibility trace. The eligibility trace for a state keeps a weighted track of visits to that state; in this way, previously visited states can be updated when a new reward is received. The concept of the eligibility trace becomes important when discussing the dynamics of reinforcement learning in the brain and the implementation of STDP. The figure demonstrates how an eligibility trace changes over time as a state is repeatedly visited.

[Figure: the eligibility trace for a state over time as it is repeatedly visited.]

To incorporate eligibility traces into the SARSA algorithm, then referred to as SARSA(lambda), define the eligibility trace for each state-action pair as e(s, a); the update becomes

delta <- r + gamma Q(s', a') - Q(s, a)
Q(s, a) <- Q(s, a) + alpha delta e(s, a), for all s and a
e(s, a) <- gamma lambda e(s, a),

where alpha is the step size (the amount the Q values are updated at each time step) and lambda determines the importance placed on recent rewards relative to future rewards via the value of the next state and action. Note that the update is applied to all states and actions, unlike the previous algorithms, which restricted the update to the current pair. The full SARSA(lambda) procedure is outlined in Algorithm 3.

Algorithm 3: SARSA(lambda)
  Initialize Q(s, a) arbitrarily and e(s, a) = 0
  Repeat for each episode:
    Initialize s; choose a from s using the policy derived from Q
    Repeat for each step of the episode:
      Take action a, observe r and s'
      Choose a' from s' using the policy derived from Q
      delta <- r + gamma Q(s', a') - Q(s, a)
      e(s, a) <- e(s, a) + 1
      For all s, a: Q(s, a) <- Q(s, a) + alpha delta e(s, a); e(s, a) <- gamma lambda e(s, a)
      s <- s'; a <- a'
    until s is terminal
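The eligibility-trace variant can be sketched as follows, reusing the same hypothetical environment interface as before; accumulating traces and the decay factor gamma times lambda follow the pseudocode above:

```python
import random
from collections import defaultdict

def sarsa_lambda(env, episodes, alpha=0.1, gamma=0.9, lam=0.9, eps=0.1):
    """SARSA(lambda): every recently visited (s, a) pair keeps an eligibility
    trace e(s, a), so a single TD error updates all of them at once."""
    Q = defaultdict(float)
    def policy(s):
        if random.random() < eps:
            return random.choice(env.actions)
        return max(env.actions, key=lambda a: Q[(s, a)])
    for _ in range(episodes):
        e = defaultdict(float)          # traces are reset at each episode
        s = env.reset()
        a = policy(s)
        done = False
        while not done:
            s2, r, done = env.step(a)
            a2 = policy(s2)
            delta = r + gamma * Q[(s2, a2)] - Q[(s, a)]
            e[(s, a)] += 1.0            # mark the pair just visited
            for key in list(e):
                Q[key] += alpha * delta * e[key]   # credit by recency
                e[key] *= gamma * lam              # all traces decay
            s, a = s2, a2
    return Q
```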
Continuous parameter spaces. So far the discussion has been limited to reinforcement learning in discrete state and action spaces. The simplest way of applying it to a continuous state space is simply to divide the continuous space so that it becomes a discrete one; however, this has two main issues. Firstly, we may end up with a very large parameter space, especially if we wish for a fine-grained representation of the continuous parameter space; this means the algorithm can take a long time to converge to the optimal solution, as the probability of visiting any particular state is low. The second issue is that we lose the ability to generalize: it is often the case that knowing the expected future reward for a particular state tells us something about the expected reward of nearby states. In the continuous case Q becomes a continuous-valued function and we cannot treat its update as in the discrete algorithms; the problem becomes one of generalizing from specific inputs to a function, a standard machine-learning problem that can be solved by, among other things, neural networks, decision trees, genetic algorithms and gradient descent. Another method of performing reinforcement learning in a continuous parameter space is to use spiking neural networks (a full description was given earlier). Spiking neural networks offer the advantage that, when constructed correctly, they generalize, learning the correct action for a previously unseen state. Reinforcement learning was first investigated and its algorithms developed before it was fully understood how the brain performed similar tasks; as shown in the following section, the underlying processes used by the brain have many correlates with reinforcement learning algorithms. In this paper we implement a spiking neural network version of reinforcement learning based on these underlying processes, and the implementation is compared with standard machine-learning algorithms for reinforcement learning in the discussion.

Reinforcement learning in the brain. In recent times it has been shown that the brain implements a reinforcement learning system similar to that proposed by Sutton and Barto. The mechanisms by which this happens are outlined below.

BCM theory. Bienenstock, Cooper and Munro developed a model of how synapses increase and decrease in strength over time, the slow increasing and decreasing of synaptic weights being known as long-term potentiation (LTP) and long-term depression (LTD) respectively. In BCM theory the amount a synaptic weight changes is dependent on the product of the presynaptic firing rate and a function of the postsynaptic activity, which is negative for low firing rates and positive for high firing rates. The effect is that if a neuron causes a lot of firing in another neuron, the synapse between them is potentiated. Evidence for the BCM model has been seen in the brain; however, it cannot account for synaptic potentiation based on precise spike timings, which is also observed in the brain.

Spike-timing dependent plasticity. The main mechanism by which connections in the brain are modified over time is believed to be spike-timing dependent plasticity. The core concept is that a connection from neuron A to neuron B is strengthened when A's activity is followed by B's activity, and weakened when B's activity is followed by A's activity. The synaptic weight update rule for STDP is defined as

dw(dt) = A+ exp(-dt / tau+) if dt > 0
dw(dt) = -A- exp(dt / tau-) if dt < 0,

where dt = t_post - t_pre is the time difference between the pre- and postsynaptic neurons firing, the constants A+ and A- define the strength of the STDP applied, and the time constants tau+ and tau- are normally chosen so that long-term depression is favoured over long-term potentiation, to prevent uncontrollable growth. The figure shows how dw changes with respect to dt, and shows the window in which STDP acts: for large values of |dt| there is negligible effect.

[Figure: the weight update dw as it relates to t_post - t_pre.]

Two main update schemes can be followed when calculating the change of weight: in one scheme all spikes within the STDP window are considered when updating the weight, while in the other only the most recent pre- and postsynaptic spikes are considered. In practice the two methods are often equivalent, as the probability of multiple firings of a specific neuron within the STDP window is low. STDP can be thought of as an associative learning mechanism: if two inputs to a neural network are often presented together in sequence, the connections from the first input to the second become strengthened. Experiments have provided evidence of both STDP and the BCM model in the brain, so a model of synaptic weight modification needs to account for both processes. Izhikevich has shown a mechanism by which it is possible for the observed properties of the BCM model to be generated by STDP, so that STDP may indeed be the underlying mechanism of synaptic modification in the brain.
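The STDP window can be sketched directly from the update rule; the amplitudes and time constants below are illustrative assumptions, chosen, as in the text, so that depression outweighs potentiation:

```python
import numpy as np

A_PLUS, A_MINUS = 1.0, 1.5       # LTD amplitude larger than LTP (assumed)
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # window time constants in ms (assumed)

def stdp(dt):
    """Weight change for dt = t_post - t_pre."""
    if dt > 0:      # pre fired before post: long-term potentiation
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    if dt < 0:      # post fired before pre: long-term depression
        return -A_MINUS * np.exp(dt / TAU_MINUS)
    return 0.0

# outside a window of a few time constants the effect is negligible
for dt in (-80, -20, -1, 1, 20, 80):
    print(dt, round(stdp(dt), 4))
```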
Dopamine modulated STDP. STDP alone cannot account for the ability of an animal to perform reinforcement learning, as it has no concept of reward. The neurotransmitter dopamine has been observed to modulate synaptic plasticity when received within a short window. Dopamine in the brain is modulated in the following way by dopaminergic neurons, i.e. neurons that use dopamine as their neurotransmitter. The majority of dopaminergic neurons are contained in the ventral part of the mesencephalon; within this region an important area is the ventral tegmental area (VTA), which is vital to the reward circuitry of the brain. The figure shows the main dopamine pathways of the brain. The VTA has connections to the nucleus accumbens, which plays an important role in the perception of pleasure, as well as to the hippocampus and the frontal cortex, associated with planning complex behaviour, decision making and moderating social behaviour. Whenever the VTA is stimulated it produces a burst of spiking activity, which in turn raises the level of dopamine in the nucleus accumbens and the hippocampus.

[Figure: the main dopamine pathways of the human brain, including the striatum, frontal cortex, substantia nigra, nucleus accumbens, VTA and hippocampus.]

Dopaminergic neurons are characterized by two different firing patterns. In the absence of a stimulus they exhibit a slow firing rate known as background firing; when stimulated, dopaminergic neurons exhibit burst firing, i.e. they fire in rapid bursts followed by a period of inactivity. This behaviour is illustrated in the figure. One important aspect of burst firing, especially in the context of this paper, is that at many central synapses bursts facilitate neurotransmitter release whereas single spikes do not. This means that stimulus-induced firing of dopaminergic neurons results in a spike in the level of dopamine, whereas background firing has no significant effect on the level of dopamine; this is a useful property for Pavlovian learning, whose implications are explored later.

[Figure: an example of a neuron exhibiting bursting behaviour; the neuron fires in bursts followed by periods of inactivity.]

Matsumoto and Hikosaka have recently shown, via experiments on monkeys, that dopaminergic neurons do not function as one homogeneous group, as previously thought. Some neurons exhibit reward-predicting behaviour, with increased firing when a reward is received and a drop in activity for a negative stimulus; however, other dopamine neurons were found to exhibit an increased response to both negative and positive stimuli. The two groups are spatially distinct and likely play different roles in learning, being differently connected to the rest of the brain, though the mechanism is not yet understood fully.

In order for dopamine to be able to modulate synaptic plasticity over a relatively large window (of the order of seconds), it has been shown that synapses must form a synaptic tag that remembers the activity of the two neurons for a short time. The precise chemical process that underlies synaptic tagging is not yet known, but research has shown that the synaptic tag for long-term potentiation is dissociable from the induction of synaptic potentiation: the tag creates the potential for a lasting change in synaptic efficacy but does not commit that change. This is exactly what is needed for reinforcement learning; in standard machine-learning algorithms for reinforcement learning the eligibility trace plays this role, and the estimate of future reward for a given environment state is modified by a combination of the eligibility trace and the received reward.

Izhikevich has shown that dopamine-modulated STDP can perform many aspects of reinforcement learning and accurately model many features of learning observed in the brain. One of the key problems in reinforcement learning is that the receipt of a reward is often delayed with respect to the action, or neural pattern, that caused the reward; this is known as the distal reward problem. STDP as described above works on a millisecond scale, so another mechanism is needed to account for the distal reward problem. It is solved in modulated STDP by the inclusion of a variable c that acts as a synaptic tag, in direct correlation with the eligibility trace used in machine learning. The synaptic tag is incremented in proportion to the amount of STDP that would normally be applied to the synapse, and decays over time:

c' = -c / tau_c + STDP(dt) delta(t - t_pre/post),

where tau_c defines the decay rate of the eligibility trace and the Dirac delta function has the effect of step-increasing c when STDP takes place; plotted against time, the value of c looks similar to the eligibility trace from machine learning shown earlier. The variable c is then used to modify the synaptic weight according to

s' = c d,

where s is the synapse strength and d the current level of dopamine. One important feature of this model is how it deals with random firings of stimulus and reward: due to the low probability of randomly occurring coincident firings of stimulus and reward, the eligibility trace maintains a record of only the firings we wish to strengthen. The figure shows an example of how the network is able to respond to a delayed reward: the pre- and postsynaptic neurons coincidentally fire, and a delayed reward is received in the form of a spike of dopamine; the coincident firing increases the eligibility trace, which slowly decays but is still positive when the reward is received, so the synaptic strength is increased.

[Figure (from Izhikevich): pre- and postsynaptic firing, extracellular dopamine, the eligibility trace and the synaptic strength over time, showing how the eligibility trace copes with a delayed reward.]

Using this model it has been shown that a network of neurons is able to correctly identify a conditioned stimulus embedded in a stream of equally salient stimuli, even when the reward is delayed long enough for several random stimuli to occur before the reward. After training, the learnt response shows increased firing when the conditioned stimulus is presented, compared to other stimuli.

[Figure (reproduced from Izhikevich): the response to a conditioned stimulus subject to modulated STDP, compared with several unconditioned stimuli.]

Pavlovian conditioning is the repeated pairing of a conditioned stimulus with an unconditioned stimulus to move the response to the conditioned stimulus. With the introduction of a set of dopamine-releasing neurons, similar to the ventral tegmental area of the brain, it has been shown that the modulated-STDP model can exhibit this response-shifting behaviour. The dopaminergic neurons are initially connected with maximum weights to the unconditioned stimulus; when, in the simulation, the conditioned stimulus is repeatedly presented preceding the unconditioned stimulus, the connections from the conditioned stimulus to the dopamine neurons increase and the dopamine response to the unconditioned stimulus is reduced, an effect similar to that observed in vivo in monkeys and rats. A modified version of this process is used in this paper to control the robot; the ability to move the dopamine response is an important property when multiple sequential behaviours need to be learnt before a reward is received. This demonstrates the ability of the model to capture several features needed for reinforcement learning. However, the dopaminergic response to a stimulus should correlate directly with the difference between the expected reward and the received reward, which is crucial in classical machine-learning algorithms and manifests as a negative drop in neural activity when a reward is expected but not received. Such a dip in activity is observed in vivo, but the model developed by Izhikevich does not include any sort of working memory.
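The pair of equations above can be simulated for a single synapse to reproduce the delayed-reward behaviour of the figure; in this sketch the dopamine increment per spike, its clearance rate and the spike times are illustrative assumptions:

```python
import numpy as np

def stdp(dt, A_plus=1.0, A_minus=1.5, tau=20.0):
    """STDP value for dt = t_post - t_pre (LTD favoured, as in the text)."""
    return A_plus * np.exp(-dt / tau) if dt > 0 else -A_minus * np.exp(dt / tau)

def simulate_synapse(pre, post, da, T=3000, tau_c=1000.0, tau_d=200.0, dt=1.0):
    """One synapse under dopamine-modulated STDP (after Izhikevich):
        c' = -c / tau_c + STDP(dt) at pre/post spikes
        s' = c * d
    The eligibility trace c tags coincident pre/post firing; the weight s
    only changes while extracellular dopamine d is elevated."""
    c = s = d = 0.0
    t_pre = t_post = None
    for t in np.arange(0.0, T, dt):
        if t in pre:
            t_pre = t
            if t_post is not None:
                c += stdp(t_post - t_pre)   # post before pre: depression
        if t in post:
            t_post = t
            if t_pre is not None:
                c += stdp(t_post - t_pre)   # pre before post: potentiation
        if t in da:
            d += 0.5                        # a dopaminergic spike raises d
        c -= dt * c / tau_c                 # the synaptic tag decays slowly
        d -= dt * d / tau_d                 # dopamine clears faster (assumed)
        s += dt * c * d                     # ds/dt = c * d
    return s

# pre fires at 100 ms, post at 110 ms; the reward (a dopamine burst) only
# arrives half a second later, yet the weight still grows:
print(simulate_synapse(pre={100.0}, post={110.0}, da={600.0, 610.0, 620.0}))
```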
To produce this dip in activity with STDP, Chorley and Seth incorporated into the modulated-STDP model proposed by Izhikevich a dual-path model, with two pathways from the stimulus to the dopaminergic neurons: an excitatory pathway and an inhibitory pathway. The consequences of this are discussed in depth later in the context of our new hybrid model. The new model accounts for several key features of the responses observed in the brain, as defined by Chorley and Seth: neurons display phasic activation in response to unexpected rewards; neurons display phasic responses to reliably reward-predicting stimuli, yet do not respond to stimuli that are predicted by earlier stimuli; these responses reappear when a previously predictable reward occurs unexpectedly; and neurons display a brief dip in activity, precisely at the time of an expected reward, when the reward is omitted. An overview of the network architecture used is given in the figure. The key feature that allows the network to display the dip in firing when a reward is expected but not received is the PFC module: a basic working memory is implemented as a sequence of distinct neural firing patterns that occur in sequence after a stimulus is presented. This means that when an untrained stimulus is followed by a reward-producing stimulus, the corresponding phasic release of dopamine strengthens the connections from the current PFC firing pattern to the str module; the next time the stimulus is presented, after the same time period there is increased firing in str, and hence increased inhibition of the dopaminergic cells.

[Figure: the network architecture used by Chorley and Seth; red lines represent excitatory connections and blue lines represent inhibitory connections. When the neurons of the dopaminergic module fire, dopamine is released, causing STDP in the plastic pathways; the mean firing rate of the str module is also modulated by the amount of dopamine.]

Plasticity and stability. One of the problems with Hebbian learning such as STDP is that it can cause runaway feedback loops. For example, if two neurons fire in sequence, the synaptic weight between them is increased; this means that in the future they are more likely to fire in sequence, making it more likely that the synaptic weight will be increased again. Without some kind of stability mechanism this process can quickly drive the weights towards infinity. One simple method that is often used is to restrict the range of values the synaptic weights can take: for example, limiting an excitatory synapse to a fixed range prevents the synaptic weight from becoming negative, and hence the synapse from becoming inhibitory, whilst also preventing the synaptic weight from increasing forever. Synaptic weight capping does not, however, prevent all synapses from quickly becoming potentiated to the maximum allowable value. In the brain it has been shown that highly active neurons have their excitability decreased over time, and highly inactive neurons have their excitability increased; this is achieved by processes that modify the tonic level within neurons, which raises or lowers the membrane potential of the neuron and hence increases or decreases the neuron's excitability. It has also been shown that within populations of neurons a global mechanism for maintaining homeostatic plasticity is used; the effect of this mechanism is that if the firing rate of a population of neurons is increased, the strength of the synapses within the population is decreased, and equivalently, if the firing rate decreases, the strength of the synapses is increased. A consequence of these homeostatic mechanisms is that they foster competition between competing behaviours. For example, suppose a group of neurons has the possibility of representing two distinct behaviours, and the neurons corresponding to one behaviour are highly active whereas the neurons for the other behaviour are less active. Due to the high firing rate, the synaptic weights are reduced; but since connected neurons with high firing rates tend to increase their synaptic weights faster through STDP than the neurons corresponding to the other behaviour, the synaptic weights of the first behaviour are increased faster. This process continues until only the neurons corresponding to the first behaviour are firing.

Neural encoding. The brain needs a mechanism to convert an external stimulus into a neural firing pattern, and to convert a neural firing pattern into an output of the network. Three main neural encoding mechanisms are observed in the brain: rate coding, temporal coding and population coding. In rate coding the strength of a stimulus or output is encoded as the firing rate of a population of neurons: a stronger stimulus emits a faster firing rate, and equivalently a faster firing rate results in a stronger output of the network, such as a joint movement.
Rate coding was the encoding mechanism first observed, by Adrian and Zotterman. Rate coding is useful when an increase in the value of a stimulus should result in a correlated increase in the response to the stimulus, and such a correlation can be learnt easily. Population coding involves the stimulation of different neurons for different values of an external stimulus. For example, the direction of movement of the eye could be represented by a network of neurons with each neuron coding for a direction within a range of degrees; in practice there is usually overlap, with a Gaussian distribution of firings across neurons being common. This allows a wide range of values to be represented, and is robust to noise. Population coding is used in various parts of the brain, for example for coding the direction of movement of an observed object, and it has the advantage of being able to react to changes in the external stimulus faster than rate coding. In temporal coding the external stimulus is represented by precise spike timings in a population of neurons; in this way a greater range of parameters can be represented by a group of neurons than could be represented with rate coding. In the context of learning taxis behaviour in a simple robot, rate coding of the sensor outputs is the appropriate mechanism, as it allows the strength of the response to an input stimulus to be correlated with the strength of the input stimulus, and this can be learnt easily.

Taxis. In the animal world one of the simplest forms of behaviour is taxis. Taxis is defined as motion towards or away from a directed stimulus; examples include phototaxis, movement directed by light, and chemotaxis, movement directed by a chemical gradient. Such behaviour has advantages for food foraging and poison avoidance. In a series of thought experiments, Braitenberg showed that taxis behaviour, as well as more complex behaviours, could be implemented by a simple robot. The robots are endowed with two sensors and two motors: attraction behaviour is achieved by connecting the left sensor to the right motor and the right sensor to the left motor, while avoidance is achieved by connecting the left sensor to the left motor and the right sensor to the right motor. A version of these simple Braitenberg vehicles is used in this paper to demonstrate reinforcement learning in spiking neural network controlled robots.
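The crossed and uncrossed wirings can be sketched as a single update rule; the gains and velocity range here are illustrative assumptions:

```python
def braitenberg_step(left_sensor, right_sensor, mode="attraction",
                     v_min=0.2, v_max=1.0):
    """Braitenberg-style taxis. Crossed wiring ("attraction") lets each
    sensor drive the opposite wheel, so the wheel away from the stimulus
    speeds up and the robot turns towards it; ipsilateral wiring
    ("avoidance") turns the robot away instead."""
    if mode == "attraction":
        v_left = v_min + (v_max - v_min) * right_sensor
        v_right = v_min + (v_max - v_min) * left_sensor
    else:  # avoidance
        v_left = v_min + (v_max - v_min) * left_sensor
        v_right = v_min + (v_max - v_min) * right_sensor
    return v_left, v_right

# stimulus ahead-left: attraction speeds up the right wheel, turning left
print(braitenberg_step(0.8, 0.1, "attraction"))   # (0.28, 0.84)
```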
Spiking neural network controlled robots. A variety of research has been conducted using SNNs to control robots. Initially this was done by manually setting the weights of the network; for example, Lewis et al. used a small network of neurons to control an artificial leg. The complexity of an SNN means that manually setting the synaptic weights is only feasible for small networks; various mechanisms for training the weights of an SNN have been used, as outlined below.

Robot training using a genetic algorithm. One solution to the problem of determining the synaptic weights is to use genetic algorithms to determine the weights. Genetic algorithms learn the best solution to a problem using a population of possible solutions. The possible solutions are evaluated to give a measure of the fitness of each solution for solving the given problem; highly fit solutions are combined to create new individuals, by which unfit solutions are replaced, and the process is repeated. Hagras et al. used genetic algorithms to train an SNN-controlled robot to exhibit wall-following behaviour. Evolving an SNN for robot control can work well when the environment is mostly static, so that the optimal solution does not change over time; in this case training can be done in a simulated environment with a large number of robots. In dynamic environments, however, where we are restricted to online learning with a single robot, this approach does not perform well.

Robot training using STDP. Another solution to determining the synaptic weights is to modify the weights over time using STDP, as described earlier. Bouganis and Shanahan effectively used STDP to train a robotic arm to associate the direction of movement of its end effector with the corresponding motor commands. This was achieved through a training stage in which the motors were randomly stimulated along with input representing the actual direction of movement of the end effector; STDP was able to modify the connection weights of the network so that, when subsequently the neurons corresponding to a desired direction of movement of the end effector were stimulated, the arm, from any given position, would move in that direction. This is an effective method of learning associations; however, it is limited in that it cannot cope with goal-oriented tasks in which the agent must maximize a reward, and it requires the action and the response to happen simultaneously, which in many real-world situations is not the case. For example, if we want a robot to reach out and touch an object, there is a delay between the initial movement and the act of touching the object.

Robot training using reward modulated STDP. Several experiments have been conducted on training SNN-controlled robots using reward-modulated STDP. A common learning task used to demonstrate reinforcement learning is the Morris water maze. This learning paradigm consists of placing an agent, originally a rat, in a liquid environment with a hidden, submerged platform; the agent must find the platform in order to escape the unpleasant experience. As the experiment is re-run, the rat, having initially found the platform by chance, on subsequent runs learns to navigate directly towards the platform. Vasilaki et al. used a spiking neural network to control a robot on the Morris water maze task: the robot was able to successfully learn to navigate directly towards the hidden platform when placed close to the platform. The level of reward was simulated as an external parameter, artificially set when the platform was found, so that the robot received a reward when it reached the platform. The robot was only able to learn the correct behaviour within a small radius around the hidden platform, due to the eligibility trace decaying over larger distances; in this paper the level of reward is embedded within the network, allowing the robot to learn over a much larger time frame.

In the context of learning taxis behaviour using reward-modulated STDP, Chorley and Seth implemented a simple Braitenberg vehicle that was able to learn to distinguish two stimuli, only one of which would elicit a reward; the experimental setup is shown in the figure. It is important to note that the experimental robot was already wired for taxis attraction behaviour, with each sensor connected to the opposite motor; the learning phase consists of learning the strength of the taxis response to the two different stimuli. The robot was able to learn to elicit a stronger taxis response to one of the stimuli, and was also able to change its behaviour when the reward was switched from one stimulus to the other. The level of reward was again simulated as an external parameter rather than being embedded in the network. In this paper we extend the work of Chorley and Seth in several key ways.

[Figure: the experimental setup used by Chorley and Seth.]

In our implementation the robot is not explicitly wired for attraction behaviour, giving the robot the ability to learn either attraction or avoidance behaviour. By incorporating the level of dopamine into the network directly, we aim to show that sequences of behaviour can be learnt, and through the use of a form of hunger our implementation is able to deal with situations without a fixed optimal behaviour, where the environment is dynamic and a set of behaviours must be learnt.

Chapter 3: Methodology

In this paper a simple SNN-controlled Braitenberg vehicle is simulated in environments consisting of food, poison and containers; the robot is subject to dopamine-modulated STDP to learn the correct behaviour to collect food items. The precise implementation details are outlined in this chapter. Due to the fact that the robot and environment are simulated, distances and sizes have no unit relative to the real world; by convention we take the default unit to be centimetres.

Environment. The learning task used to test reinforcement learning using spiking neural networks is food collection and poison avoidance in a simulated environment. Four types of object can exist in the environment:

Food: food items are discs of a fixed radius. When a food item is collected it is replaced by another food item at a random location.
Poison: poison is identical to food except that it induces a negative dopamine response (see the plasticity section).
Food container: a food container is a disc of larger radius that contains a single food item, randomly located within it. Food items cannot be sensed outside a food container, and containers cannot be sensed from within a container.
Empty container: identical to a food container except that it does not contain food items; it is also sensed differently by the robot.

The environment has no walls but the characteristics of a torus: if the robot moves past the right edge of the environment it reappears at the left of the environment, and similarly for the top and bottom edges. In this way the robot does not have to deal with collision avoidance. There are two main variations of the environment. In the simplest case the environment consists of randomly located food objects; whenever the robot reaches an item of food, the piece of food is moved to another random location within the environment. This environment is referred to as the food-only environment. In the second type of environment the food is contained within containers: food is sensed by the robot only when it is within a container, and containers are sensed only when the robot is not within a container. This forces the robot to learn to turn towards food containers, something that does not directly give a reward. When food is collected, the food container is moved to a random location in the environment.
This environment is referred to as the food-container environment. An extension of this environment also includes empty containers, which contain no food items.

[Figure: the food-only environment, containing randomly placed food, and the food-container environment, where the food items are positioned within containers; food is only sensed inside a container and containers are only sensed outside a container. The environment wraps around at its edges.]

Robot. The robot consists of a circular body with two wheels positioned on either side. To ensure the robot is always moving, the wheel motor velocities are restricted to a positive range, which gives the robot a minimum turning circle. The robot is equipped with two types of sensor, which come in pairs, one left and one right, covering a range directly in front of the robot.

[Figure: the structure of the robot, showing the left and right sensor ranges and the wheels.]

Touch sensors elicit an instantaneous response whenever the robot comes into contact with an object. Range sensors produce a response that varies linearly with the position of the object: an object directly in front of the robot elicits a sensor response of one, whereas an object at the edge of the sensor range produces a response of zero. Each sensor senses at most one object at a time: if more than one object is in sensor range, only the closest object is sensed, which prevents the robot from being distracted by objects far away while responding to closer ones. A further mechanism is implemented between the two sensors of a pair: if both sensors detect an object at a given time, only the sensor with the strongest value elicits a response. This mechanism is useful in situations where, for example, two food items are located at equal distances to the left and right of the robot, as illustrated in the figure: without the mechanism, both sensors sense the food items, so the robot would increase its left motor velocity to turn right and also increase its right motor velocity to turn left, with the effect that the robot drives straight between the two food items. As implemented, this mechanism is not biologically plausible; for simplicity the mechanism is implemented in the sensors, but an alternative implementation would be to achieve it directly in the wiring of the robot's neural network. A code sketch of this sensing scheme is given after the table below.

[Figure: without this mechanism the sensors respond equally and the robot increases both motor speeds, ultimately driving straight between the two food items.]

The table gives an overview of the sensors of the robot in each environment. In the food-only environment, food items can be replaced by poison to demonstrate the ability of the robot to modify its behaviour; in this case the food range and touch sensors are used for sensing the poison. In the food-container environment the robot has an additional container range sensor. A container touch sensor elicits a response when a container with food is entered; the response is elicited only on entering, not whilst the robot remains inside the container. Equivalently, a separate touch sensor reacts to entering containers that contain no food items; this is useful for demonstrating that the robot can differentiate a reward-predicting stimulus from a non-predicting stimulus.

Object sensed        | Sensor type | Available in food-only environment | Available in food-container environment
Food                 | Range       | Yes                                | Yes
Container            | Range       | No                                 | Yes
Food                 | Touch       | Yes                                | Yes
Container with food  | Touch       | No                                 | Yes
Empty container      | Touch       | No                                 | Yes
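The range-sensor behaviour just described can be sketched as follows; the angular field of view and the linear angular falloff are assumptions (the text states only that the response is one for an object dead ahead and zero at the edge of the sensed region):

```python
def sensor_pair(objects, fov=1.0):
    """objects: list of (distance, bearing) pairs, bearing in radians from
    the robot's heading (positive = left). The response falls linearly from
    1 for an object dead ahead to 0 at the edge of the sensed sector; only
    the closest object in range is sensed, and only the stronger of the two
    sensors of the pair responds."""
    in_view = [(d, b) for d, b in objects if abs(b) <= fov]
    if not in_view:
        return 0.0, 0.0            # nothing in range: neither sensor fires
    d, b = min(in_view)            # the closest visible object wins
    strength = 1.0 - abs(b) / fov  # linear falloff with angle (assumed)
    return (strength, 0.0) if b > 0 else (0.0, strength)

# one food item slightly to the right, another further away on the left:
print(sensor_pair([(5.0, -0.3), (9.0, 0.6)]))   # only the right sensor fires
```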
Neural model. The Izhikevich model, described earlier, is used to simulate the individual neurons. The dynamics of each neuron are modelled by the two differential equations given previously, where I is the inbound current from spikes weighted by the strength of each synapse, v is the membrane potential, u is the recovery variable that determines the refractory period of the neuron, and a, b, c and d are the parameters of the model. For the excitatory neurons the parameters are set, using a random variable within a range per neuron, to produce neurons with a regular spiking pattern (see the figure); for the inhibitory neurons the parameters are set so that the neurons exhibit fast-spiking behaviour. These neural spiking patterns are consistent with the observed behaviour of the brain and with previous neural models.

Numerical approximation. The differential equations given in the Izhikevich model are simulated using the Euler method of numerical approximation. For a given differential equation for which we know the value at time t, the Euler method approximates the value a small time step later via

y(t + dt) = y(t) + dt f(y(t), t),

and the formula is applied repeatedly to approximate the dynamics over time. The step sizes used in our simulations for approximating v and u are consistent with Izhikevich's original paper.

Phasic activity. The properties of individual neurons mean that when a group of neurons is interconnected and stimulated with a low baseline current, the group of neurons tends to fire in phase: synchronous firing is followed by synchronous inactivity. The frequency at which the neurons oscillate depends on various parameters of the neurons and the network; phasic activity is observed in the brain over a wide range of frequencies. The inputs and outputs of the model network are updated at a fixed interval corresponding to this update frequency; the following sections describe the exact method for the sensors and motors. For simplicity, the update frequency is explicitly implemented outside the neural network. One possible way the oscillatory property could be embedded in the network directly would be a separate group of interconnected inhibitory neurons with a baseline level of firing, so that the firing rate of the group oscillates; this inhibitory population, connected to the relevant sensor and motor neurons, would force them to fire at this frequency as well, simulating the update scheme of the model used in this paper. Phasic activity matters for the dynamics of STDP. Without phasic activity, the synaptic weights between two populations of connected neurons are depressed over time, since there is not much more probability of a neuron in one population firing before a neuron in the other population than after it, and the parameters of STDP are set to favour long-term depression, so such synapses would be depressed. With phasic activity the network exhibits clear cause-and-effect behaviour: for example, if the left sensor is stimulated while the rest of the network is in an active phase and the right motor neurons are then active (say, a right-motor action in response to the left-sensor stimulus), the eligibility traces of the left-sensor-to-right-motor synapses will be high at the point where the sensor neurons fired before the motor neurons; if a reward is subsequently received, the robot learns attraction behaviour.

Sensorimotor encoding. Sensor neuron encoding. Each sensor returns a result in the range from zero to one; at every update, the corresponding sensor neurons are stimulated with a current taken from a Poisson distribution whose mean is proportional to the sensor value. The effect of the stimulus current on the firing rate of a population of sensor neurons is shown in the figure. The touch sensors, which elicit an instantaneous response, stimulate their corresponding neurons in the same way, with a current taken from a Poisson distribution.

[Figure: the firings of a group of sensor neurons as the sensor value is linearly increased from zero to one.]

Motor velocity calculation. At each simulation update, the total numbers of firings of the left and right motor neuron groups, f_left and f_right respectively, are first calculated; the motor velocities are then calculated, for example for the left motor, using the following rule:

v_left = v_max if f_left > f_right, and v_left = v_min otherwise,

where v_left is the output velocity of the left motor and v_max and v_min are the maximum and minimum allowable velocities of the motors. The rule for the right motor is equivalent, with the left and right terms switched. Note that this formula implements a winner-take-all mechanism between the motors: of the two groups of motor neurons, the one with the greater firing rate has its corresponding motor set to the maximum velocity, while the other motor is set to the minimum velocity. This helps the robot's exploration, as it reduces the probability of the robot driving in a straight line, and the mechanism also helps the learning of behaviours, as it amplifies the result of a slight difference in the synaptic weights from the sensors to the motors. The values of v_max and v_min are set so as to increase the time between a reward-predicting stimulus and the reward, as discussed in a later section.
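The winner-take-all readout can be sketched in a few lines; treating equal firing counts as leaving both motors at the minimum velocity is an assumption about the garbled original formula:

```python
def motor_velocities(f_left, f_right, v_min=0.2, v_max=1.0):
    """Winner-take-all readout of the two motor populations: the group with
    more firings in the update window drives its wheel at v_max while the
    other wheel runs at v_min; equal counts leave both at v_min (assumed)."""
    v_left = v_max if f_left > f_right else v_min
    v_right = v_max if f_right > f_left else v_min
    return v_left, v_right

print(motor_velocities(42, 17))   # (1.0, 0.2): the robot turns right
```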
consists neurons reason dopaminergic neuron group larger left food sensor right food sensor food touch sensor external inputs connection subject plasticity connection weights left food sensor neurons right food sensor neurons food touch sensor neurons spiking neural network left motor neurons right motor neurons left motor right motor dopamine neurons external outputs figure architecture robot neural network environment connection subject plasticity range sensors touch sensors connection weights left food sensor right food sensor left container sensor right container sensor food touch sensor touch sensor touch sensor left food sensor neurons right food sensor neurons left container neurons right container neurons food touch sensor neurons touch sensor neurons touch sensor neurons left motor neurons right motor neurons left motor right motor dopamine neurons external inputs spiking neural network external outputs figure architecture robot neural network environment learning elicit dopamine response reward predicting stimulus received relies heavily random firings dopaminergic neuron group larger number neurons found increase learning speed conductance delay synapses set meaning would take spike propagate change membrane potential neuron spiking neural network robot consists neurons distribution neurons across different neural groups shown table network robot consists neurons total distribution neurons robot shown figure neuron range neuron group left food sensor right food sensor left motor right motor food dopamine neurons inhibitory neurons table organization neurons robot environment neuron range neuron group left food sensor right food sensor left container sensor right container sensor left motor right motor food dopamine neurons inhibitory neurons table organization neurons robot environment plasticity synaptic plasticity implemented using dopamine modulated stdp outline given section value function stdp time calculated stdp time difference pre post synaptic neurons firing update scheme used recent synaptic spike considered calculating stdp specific synapse optimal parameters model found running robot environment seconds times repeated large range parameters value achieved best score measures number food items collected used parameters strength stdp plotted difference pre post synaptic firing figure stronger response neuron fires neuron negative long term depression preferred long term potentiation means uncorrelated firing two populations neurons tend decay synaptic weights zero correlated firings effect increasing associated synaptic weights time useful helping robot learn correct behaviour ignore noise neuron firings stdp strength spike timing stdp figure strength stdp shown difference spike timings tpost tpre synapse maintains eligibility trace value updated according stdp decay rate eligibility trace set dirac delta function ensures stdp effects eligibility trace either pre post synaptic neuron fires figure demonstrates eligibility trace single synapse simultaneous firing eligibility trace becomes negligible maximum one second coincident firing means action robot currently performing forgotten time see section details memory extended moving dopamine response eligibility trace time eligibility trace time figure eligibility trace synapse neuron fires neuron fires firing occurs eligibility trace decays every millisecond synaptic weights updated according synapse strength current level dopamine level dopamine increased every dopaminergic neuron fires effect increasing level dopamine 
approximately every time food item reached majority dopaminergic neurons fire experiments contain poison poison effect reducing level dopamine every dopaminergic neuron fires consequence synapses highly active lead poison high eligibility trace synaptic weight reduced delay dopaminergic neuron firing corresponding increase dopamine used gives enough time eligibility trace synapses projecting dopaminergic neurons updated corresponding spike dopamine baseline level dopamine system set meaning absence firing dopaminergic neurons level dopamine would decay negative value effect working like hunger robot collect food long time strong synaptic weights become weaker weak synaptic weights become stronger allows robot change behaviour current strategy effective collecting food constant current supplied dopaminergic neurons induced background firing rate values higher robot able learn correctly talked depth section described section dopaminergic neurons two patterns activity background firing stimulus induced burst firing result background firing significant effect level dopamine simulate level dopamine increased five dopaminergic neurons fired window effect discussed section moving dopamine response conditioned stimulus many scenarios agent need perform sequence behaviours order get reward dopamine released reward achieved plasticity occur time efficacy stdp drops exponentially negligible therefore reward predicting behaviour takes place reward received behaviour learnt example environment robot run halfspeed average time entering food collected therefore attraction behaviour would take long time learn dopamine released food collected robot needed learn third behaviour picking key open container likely temporal difference behaviour actual reward would great ever learnt simple method deal problem agent learn elicit dopamine response receives reward predicting stimulus example often stimulated prior food item collected hence spike dopamine received robot learnt produce dopamine spike stimulated robot able learn attraction behaviour much faster long delay performing behaviour receiving dopamine response seen process could repeated long chain behaviours could learnt moving dopamine response achieved connecting container neurons dopaminergic neurons allowing synapses plastic figure shows example robot able strengthen dopaminergic neuron connections firings two neurons plotted one group index one dopaminergic group index robot enters food container neuron fires purely chance dopaminergic neuron fires later due background firing causes eligibility trace two neurons jump later robot reaches food item dopaminergic neurons fire causing spike level dopamine spike dopamine combined still positive eligibility trace increase weight synapse since synaptic weight higher increases probability dopaminergic neuron fire neuron future weight increased ability robot learn dopamine response relies combination background firing rate dopaminergic neurons strength long term potentiation compared long term depression background firing low chance dopaminergic neuron firing soon neuron small robot take long time learn background firing rate high level long term depression also high random uncorrelated firings sensor neurons dopaminergic neurons eligibility trace shift dopamine example eligibility trace time firings time seconds dopamine time seconds synaptic strength time seconds neuron index dopamine synaptic weight figure activity two neurons plotted one index sensor group one index dopaminergic neuron group robot enters 
food container neuron spikes chance dopaminergic neuron fires soon raises eligibility trace robot collects food item level dopamine spikes dopamine spike combined positive eligibility trace increase weight synapse neuron dopaminergic neuron average cause synapses eligibility traces become negative cancel potentiation synapses consider would happen threshold number neurons required level dopamine raised equivalently dopaminergic neurons brain produced bursting activity background firing see predicting stimulus also synaptic weights synapses projecting dopaminergic neurons potentiated due feedback loop created example chance neuron fires dopaminergic neuron eligibility trace increase robot immediately receive increase dopamine due dopaminergic neuron firing increase probability next time sensor neuron triggered dopaminergic neuron fires afterwards resulting synaptic weight increasing even parameters model altered plasticity happened much slower rate feedback loop could avoided small increase synaptic weight previous example would alter probability neurons firing correlated manner much therefore would likely dopaminergic neuron would point fire sensor neuron cause synaptic weight decrease however advantage method thresholding dopaminergic firing rate needed increase level dopamine robot robust learning well able learn much faster network stability stdp inherently unstable mechanism synaptic weight increased probability neuron firing neuron becomes increased probability synaptic weight potentiating increased feedback mechanism lead synapses quickly becoming potentiated maximum value combat two mechanisms used synaptic weights excitatory synapses restricted range addition local dampening mechanism used based observation brain highly active group neurons excitability decreased time discussed section synapses divided three distinct groups synapses belonging three groups food taxis group food neuron motor neuron synapses container taxis group container neuron motor neuron synapses container dopamine group container neuron dopaminergic neuron synapses groups mean synaptic weight becomes potentiated half maximum value synapses within group weights reduced prevents feedback loop synaptic weights become potentiated maximum value benefit approach lead competition behaviours within synaptic group figure demonstrates competition occur food attraction food avoidance behaviours synaptic competition example synaptic weight food attraction synapses food avoidance synapses time seconds figure example synaptic weight dampening lead competition behaviours synaptic weights food attraction food avoidance behaviour potentiated food attraction synaptic weights potentiating faster seconds average synaptic weight groups reaches maximal allowable value weights reduced food attraction synaptic weights increase faster rate next time maximal average synaptic weight reached food attraction behaviour increased food avoidance behaviour increased less way strength food avoidance behaviour becomes reduced time whilst strength food attraction behaviour increased taxis robot architecture constructed potential taxis behaviour example food attraction poison avoidance achieved two sensors front left right robot wired two motors wiring robot predispose prefer attraction avoidance behaviour equal number connections sensor motor acquisition behaviour left stdp strengthen synaptic connections corresponding behaviour learnt figure shows strong connections opposite sensors motors cause food attraction taxis behaviour left sensor active 
right motor active right sensor active left motor active left sensor active right motor active figure demonstration wiring left sensor right motor right sensor left motor cause food attraction taxis left sensor active robot turns forward left right sensor active robot turns forward right finally robot turns collects food item exploration ensure robot explores environment robot needs kind exploratory behaviour implemented every randomly choosing either right left motor remaining motor neurons stimulated current taken poisson distribution mean one possible way exploration could implemented directly network would motor neurons connected oscillating see section details oscillating behaviour occur mechanism implemented left right motors would effect one population neurons would active whilst motor neurons phasically active motor neurons become phasically inactive would easier mechanism switch active group network would display properties implemented exploration motor stimulation synaptic weights sensors motors increased enough current motors sensors override exploration current effect robot learns anything perform random walk learning robot perform random walk whenever sense anything also environment changes causing synaptic weights sensors motors drop robot back performing random walk path robot environment food run seconds shown figure seen robot able effectively explore environment path robot performing random walk position position figure path taken robot seconds environment food random motor stimulation causes environment explored effectively chapter results discussion orbiting behaviour interesting property robot environment thing fixed set optimal synaptic weights almost scenarios environment best set synaptic weights would left sensor right motor right sensor left motor synapses set maximum weight sensor motor synapses set zero cause robot turn quickest towards food however robot turning circle situations robot orbit food item indefinitely figure shows path robot using synaptic weights approaches food item degrees time step robot senses food left sensor turns left however never reaches food optimal robot needs determine orbiting food modify behaviour accordingly path robot orbiting food position food robot path orbits food start position position figure path robot shown blue robot position plotted every robot sense food left sensor turns left continues sense food left sensor continues turning left however turn fast enough reach food ends orbiting experimental demonstrate ability robot learn modify behaviour gets stuck one orbits robot placed environment one food item synaptic weights opposite set maximum allowable value sensor motor synapses set zero forces robot initially perform food attraction behaviour robot positioned way would start orbiting food item shown figure first scenario none robot synapses plastic learn robot simulated seconds repeated times second scenario robot synapses modified using starting setup orbiting environment position food robot start position orientation position figure starting position environment force robot start orbiting stdp robot allowed learn robot simulated seconds trials results discussion none trials learning disabled robot manage break orbiting behaviour learning enabled robot managed learn stop orbiting food able collect food item trials average took robot seconds stop orbiting food distance food first seconds one trial plotted figure figure shows robot able modify behaviour stop orbiting less seconds food item collected seconds food collected 
replaced random location robot collects food one time distance robot food time distance robot stops orbiting food food items collected time seconds figure distance robot food item plotted time synaptic weights initially set force food attraction behaviour plasticity turned robot starts orbiting food seconds able unlearn orbiting behaviour stops orbiting food see relearning achieved mean synaptic weights sensors motors time plotted figure several interesting properties shown firstly synaptic weights left sensor right motor slowly decaying whilst robot orbiting food baseline level dopamine system small negative value therefore highly active synapses strong weights synaptic weight reduced due high eligibility trace effect reducing strength behaviour robot currently performing case turning left left food sensor activated negative dopamine thought analogous hunger whatever robot currently giving food incentive change behaviour notice right sensor left motor synaptic weights remain strong due fact right sensor neurons stimulated eligibility trace remains small whilst robot orbiting synaptic weights left sensor left motor also increased left motor neurons still randomly stimulated exploration method described section causes random firings left motor population uncorrelated left sensor neuron firings parameters stdp used favour long term depression long term potentiation eligibility trace synapses negative given negative eligibility trace negative dopamine value increase synaptic weights mean sensor motor synaptic weights orbiting environment mean synaptic weight mean synaptic weight mean synaptic weight mean synaptic weight left sensor left motor time seconds left sensor right motor food collected orbiting stopped time seconds right sensor left motor time seconds right sensor right motor time seconds figure mean synaptic weights left right groups shown robot environment one food item robot initially orbiting food seconds orbiting unlearned seconds robot collects food item figure shows neural firing rate sensor neuron groups motor neuron groups trial shows synaptic weights left sensor right motor decreased firing rate right motor also decreases equivalently synaptic weights left sensor left motor increased firing rate left motor neurons seconds combination increased synaptic weights left motor random exploration stimulus motors means firing rate left motor greater firing rate right motor robot turns away food item robot performs random walk sense food food collected notice left sensor right motor synaptic weights increased figure left sensor left motor synaptic weights decreased robot starts relearn food attraction behaviour neuron firing rate orbiting environment mean firing rate left sensor right sensor left motor right motor time seconds figure mean firing rate neurons sensor motor groups single trial orbiting environment seconds left motor firing increased enough right motor firing decreased enough robot stops orbiting food confirm ability relearn current behaviour working relies baseline level dopamine negative experiment baseline level dopamine set none trials robot manage escape orbiting food figure shows synaptic weights one trials figure shows positive level background dopamine effect reinforcing behaviour robot currently performing shown increase weights left sensor right motor synaptic weights orbiting environment positive baseline dopamine mean synaptic weight mean synaptic weight mean synaptic weight mean synaptic weight left sensor left motor time seconds left sensor right motor time 
seconds right sensor left motor time seconds right sensor right motor time seconds figure synaptic weights sensors motors baseline level dopamine positive set orbiting environment positive dopamine acts reinforce current behaviour left sensor right motor synaptic weights increase food attraction learning section test ability robot learn behaviour modifying synaptic weights experimental environment consisting randomly placed food items constructed food item collected moved random position environment robot placed environment simulated seconds plasticity taking place sensor neurons motor neurons process repeated trials time resetting sensor motor synaptic weights zero comparison robot connections sensor neurons motor neurons plasticity run trials seconds environment effect robot performs random walk collects food randomly driving results discussion mean food collection rate time robot learning enabled random walk robot shown figure robot learning enabled able rapidly learn turn towards collect food items high collection rate remained stable rest trial average robot took seconds full food attraction behaviour learnt stabilized food collection rate environment collection rate collections per second robot learning disabled robot learning enabled time seconds figure food collection rate time robot environment randomly placed food items table shows mean food collected trials whether robot able correctly learn food attraction behaviour robot deemed learnt attraction behaviour end trial synapses food attraction left sensor right motor right sensor left motor average well average higher synapses food avoidance values high enough exploration behaviour overridden different enough food attraction preferred robot able learn behaviour trials learning enabled robot type learning disabled learning enabled mean food collected std deviation food collected correct table table shows total amount food collected robots averaged trials well percentage robots able correctly learn food attraction behaviour figure shows mean weights sensors motors averaged trials synaptic weights opposite rapidly increased reach maximum weight seconds average causes robot perform food attraction behaviour reflected increase food collection figure robot sensor motor connection enviroment mean synaptic weight mean synaptic weight mean synaptic weight mean synaptic weight left sensor left motor time seconds left sensor right motor time seconds right sensor left motor time seconds right sensor right motor time seconds figure graphs show mean synaptic weights sensor motor neurons averaged trials environment left right right left connection weights rapidly increased maximum value causing behaviour demonstrate robot able learn perform food attraction behaviour mean eligibility trace synapses left sensor motors first seconds one trials plotted figure start synaptic weights set zero robot performs random walk notice eligibility traces fluctuate neither consistently higher robot collects food item likely recently turning towards food item left sensor right motor neurons active correlated left sensor stimulated followed right motor neurons active means eligibility trace likely higher left sensor right motor food item collected start seen seconds noticeable seconds synaptic weights significantly modified dopamine level high food collected left sensor right motor connections become strengthened turn means robot likely turn towards food items left sensor right motor connection strength even likely increased action reinforcement food attraction 
synaptic connections quickly become saturated maximum value sensor motor eligibility traces environment left left motor left right motor food collected eligibility trace time seconds figure mean synaptic eligibility trace left sensor left motor neurons left sensor right sensor neurons vertical lines display food collected initially weights zero robot performs random walk probability eligibility trace higher left sensor left motor connections food achieved acts increase connections acts feedback loop robot likely turn towards food left sensor left motor eligibility trace even likely positive notice figure left sensor left motor right sensor right motor synaptic weights also increase level lower value reason negative baseline level dopamine figure shows eligibility trace left sensor motor synapses single trial well synaptic weights synapses initially synaptic weights left sensor left motor small firing left sensor neurons left motor neurons uncorrelated causing negative eligibility trace negative baseline level dopamine acts increase synaptic weights synapses negative eligibility trace synaptic weights left sensor left motor increase neurons start fire correlated manner stimulating left motor causes firings left motor turn causes eligibility trace rise eventually around seconds eligibility trace rises point positive dopamine spikes caused food act cancel negative baseline level dopamine point left sensor left motor synaptic weights stabilize left sensor right motor synapses also subject process however main cause potentiation synapses comes large spikes dopamine food collected large spikes dopamine occur left sensor right motor eligibility traces highest override effects small negative baseline dopamine final synaptic weights sensor neuron motor neuron synapses single trial shown figure couple interesting properties note weights firstly lot synapses either fully potentiated weight set fully depressed weight due feedback mechanism inherent stdp synaptic weight starts become potentiated greater chance potentiated future second interesting property individual motor neurons differentiated either respond left sensor respond right sensor occurs sensor neurons stimulated synapse left sensor neuron left motor neuron becomes potentiated likely left motor neuron fire left sensor neurons stimulated synapses left sensor left motor neuron become potentiated robot learnt attraction behaviour common sequence neural firings left right right left left right motor seen motor neuron reacting strongly left sensor likely fire right sensor stimulated acts depress connections right motor keep neuron differentiated sensor motor eligibility trace environment left left motor left right motor eligibility trace time seconds eligibility trace single trial robot sensor motor connections environment single trial mean synaptic weight left sensor left motor left sensor right motor time seconds synaptic weights single trial figure graphs show eligibility trace synaptic weights left sensor neurons groups motor neurons first seconds single trial environment initially left sensor left motor eligibility traces highly negative drives synaptic weights milliseconds left sensor left motor eligibility traces risen enough synaptic weights stable increase neuron index sensor motor synaptic weights training single trial connection exists neurons neuron index figure synaptic weights sensors motors training environment plotted motor neuron indices left axis range left motor right motor sensor neuron indices bottom axis range left sensor 
right sensor neurons either fully potentiated black fully depressed white motor neurons differentiated either respond left sensor neurons right sensor neurons evident horizontal banding plasticity learning robot learnt behaviour able modify behaviour environment changes demonstrated replacing food items poison robot learnt food attraction behaviour experimental robot placed environment consisting food items seconds giving robot enough time learn food attraction behaviour food items replaced poison produces negative dopamine response robot run seconds trial run times comparison robot learning disabled also run environmental trials results discussion collection rate time dynamic environment shown figure robot able unlearn food attraction behaviour almost immediately see rapid relearning takes place useful examine eligibility trace point poison introduced mean eligibility trace connections left sensor left right motors period food switched poison shown figure due fact robot learnt food attraction behaviour strong connections left sensor right motor means eligibility trace two sets neurons high stimulation left sensor always causes response activity right motor high eligibility trace means poison introduced negative spike dopamine received rapid decrease weights left sensor right motor shown figure mean synaptic weights sensors motors plotted averaged trials also reduction synaptic weights left sensor left motor right sensor right motor synapses synaptic connections become potentiated maximum value high eligibility trace drop weights saturated connections greater increase synaptic weight uncorrelated connections thing note food collection rate environment poison introduced rate food collection marginally less robot performing random walk reflected synaptic weights avoidance marginally higher seen figure food collection rate environment robot learning disabled robot learning enabled collection rate collections per second time seconds figure collection rate time robots environment food items seconds food items replaced poison learning robot quickly unlearns food attraction behaviour poison introduced sensor motor eligibility trace environment single trial food collected poison collected left left motor left right motor eligibility trace time seconds figure mean eligibility trace synapses left sensor left right motors shown food items replaced poison poison introduced eligibility trace left sensor right motor high poison collected exactly point left sensor right motor eligibility traces highest poison collected food attraction unlearnt eligibility trace fluctuates around zero robot sensor motor connections environment left sensor left motor left sensor right motor right sensor left motor right sensor right motor mean synaptic weight time seconds synaptic weights environment robot sensor motor connections environment left sensor left motor left sensor right motor right sensor left motor right sensor right motor mean synaptic weight time seconds synaptic weights environment figure mean synaptic weights sensors motors averaged trials foodpoison environment shown shows poison introduced synaptic weights rapidly reduced shows last seconds shows average food avoidance synaptic weights slightly higher food attraction due fact robot rewarded avoiding poison terms increase dopamine therefore synapses corresponding poison avoidance potentiated sensor motor synapses end weights low values situation exploration stimulus override current provided sensor motor connections one possible way get robot learn food 
avoidance behaviour would positive baseline level dopamine though shown section would prevent robot escaping orbiting behaviour research would needed find solution works problems dopamine response secondary stimulus section ability robot learn elicit positive dopamine response new stimulus repeatedly paired dopamine inducing stimulus tested experimental demonstrate reward predicting dopamine response environment consisting food items containers used see section detail architecture robot changed environment environment consists containers contain food containers contain food environment visualized figure robot different sensor detecting entering food containing container empty container however left right container sensors distinguish two container types start experiment sensor motor connections set food container attraction behaviour figure shows synaptic weights set force attraction behaviour result attraction behaviour robot visit empty food containing containers equally often inside food container robot turn towards consume food item environment food containers empty containers food location location figure environmental demonstrate secondary stimulus dopamine learning consisting food items synaptic weights two container neurons dopaminergic neurons initially set zero plasticity turned connections robot placed environment run second trial trial repeated times left food sensor neurons right food sensor neurons left container neurons synapses set maximum right container neurons synapses set zero left motor neurons right motor neurons figure starting weights robot force attraction behaviour results discussion trials dopaminergic neuron synapses potentiated four times dopaminergic neuron synapses end trial figure shows mean synaptic weight time food container dopaminergic neuron synapses empty container dopaminergic neuron synapses seen synapses reward predicting stimulus dopaminergic neurons potentiated levelling maximum value predicting stimulus contrast little potentiation synaptic connections dopaminergic neurons remaining stable zero connection strength container sensors dopamine neurons food container dopamine empty container dopamine mean synaptic weight time seconds figure mean synaptic weight time neurons dopaminergic neurons well neurons dopaminergic neurons trials see effect dopamine thresholding described section dynamics system experiment without requiring five dopaminergic neurons fire order raise level dopamine effect background firing dopaminergic neurons raise level dopamine dopamine thresholding switched dopaminergic neuron synaptic weights end trial higher dopaminergic neuron synaptic weights trials compared thresholding enabled synaptic weights experiment without thresholding plotted figure note essence two processes happening first seconds sets synapses increased see sets synaptic weights increased reward predicting set increased useful plot eligibility trace single synapse figure eligibility trace firing rate synaptic strength plotted single synapse neuron single dopaminergic neuron container neuron happens fire dopaminergic neuron purely chance due background firing rate causes eligibility trace jump second neuron dopaminergic neuron firing immediately causes spike level dopamine connection strength container sensors dopamine neurons threshold mean synaptic weight food container dopamine empty container dopamine time seconds figure synaptic weights sensor dopaminergic neurons sensor dopaminergic neurons shown experiment dopamine threshold single dopaminergic neuron 
spiking could raise level dopamine would happen dopamine thresholding higher dopamine level combined high eligibility trace cause synaptic weight increase could described double feedback loop increase synaptic weight means two neurons likely fire sequence future meaning eligibility trace likely increase increase synaptic strength means dopaminergic neuron likely fire hence increase level dopamine exactly point raise strength synapse consider happens reward predicting stimulus neuron coincidentally fires dopaminergic neuron increase dopamine synaptic weight described also likely large spike dopamine within due food item collected effect raising synaptic weight even since eligibility trace still positive explains rate increase synaptic weights reward predicting stimulus dopaminergic neurons faster predicting stimulus second thing note figure seconds synaptic weights dopaminergic synapses start decrease due homeostatic mechanism described section combined synaptic weights synapses leading dopaminergic neurons limited limit reached synapses leading dopaminergic neurons weight reduced food container dopaminergic synapses increase weight faster empty container dopaminergic synapses next time weights reduced empty container synapses reduced increased time interval without dopamine thresholding synaptic weight dampening would nothing stop dopaminergic neuron synapses potentiating maximum value using techniques robot robust able correctly learn produce spike dopamine stimulus non stimulus eligibility trace neuron index dopamine synaptic weight eligibility trace time firings time seconds dopamine time seconds synaptic strength time seconds figure example dopaminergic neuron synapses become potentiated neuron sensor neuron neuron dopaminergic neuron coincidentally fire corresponding spike dopamine causes synaptic weight increase secondary behaviour learning section explore ability robot learn secondary behaviour behaviour give direct reward succeeded another behaviour lead reward behaviour learnt attraction driving towards advantageous behaviour entering often leads food collected robot subsequently performs behaviour experimental environment consisting containing single item food created positioned randomly moved random location whenever associated food item collected five different robots run environment trials seconds robots except benchmark robot initial sensor motor synaptic weights set food attraction behaviour preference container set result initially robots wander randomly whilst outside containers whilst inside containers robots turn towards collect food items note unlike environment environment fixed optimal robot size means robot get stuck orbiting also whilst robot starts orbiting food item orbit take container orbiting problem environment set fixed optimal robot benchmark dynamics five robots learning disabled plasticity switched synapses result robot wanders randomly whilst outside containers whilst inside containers robot turns collects food items container dopamine connections container dopaminergic neurons set zero plasticity switched synapses dopamine response received container stimulated plasticity enabled container motor synapses container dopamine learning enabled connections container dopaminergic neurons initially set zero plasticity enabled connections allowing become potentiated robot learn dopamine response entering container motor synapses plastic fixed high container dopamine connections container dopaminergic neurons set maximum value plasticity switched synapses forces 
spike dopamine whenever robot enters container motor synapses plastic benchmark robot plasticity switched food attraction synapses set maximal value well attraction synapses sensor motor synapses set zero results discussion figure average score time five robots plotted correspondingly total food collected well percentage robots learnt attraction behaviour end trial shown table robot deemed learnt behaviour end trial left container right motor neuron synapses stronger average well stronger average left container left motor neuron synapses equivalent also true right container synapses robot deemed correct food collection rate environment benchmark fixed high learning enabled container dopamine learning disabled collection rate collections per second time seconds figure averaged score time five different robots run environment contained containers several interesting properties shown figures firstly note robots managed perform better robot learning disabled performed random walk whilst outside containers means even robot receive dopamine container stimulated container dopamine able learn attraction behaviour table see managed learn container attraction cases due fact average entering container food collected small enough stdp still small effect potentiating synapses cause container attraction behaviour next thing notice robot fixed strong weights sensors dopaminergic neurons fixed high container dopamine robot learn robot type learning disabled learning enabled fixed high containerdopamine benchmark mean food collected std deviation food collected correct table table shows total amount food collected three robots averaged trials tiate synapses container dopamine learning enabled perform better robot without dopamine response able learn attraction behaviour cases shows able learn elicit dopamine response stimulus helps able learn sequence behaviours needed agent get reward expected robot starts dopaminergic neurons already fully potentiated able learn attraction behaviour faster robot learn potentiate neurons stimulating elicit dopamine response figure shows mean synaptic weight dopaminergic neurons robot container dopamine plasticity enabled seen robot able correctly potentiate synaptic weights synapses potentiate robot able learn foodcontainer attraction behaviour shown fact food collection rate robot slowly rises equal robot container dopaminergic synapses strong start dopaminergic neurons synaptic weight mean synaptic weight time seconds figure average synaptic weight neurons dopaminergic neurons trials shown container dopamine learning enabled robot last thing note food collection rate figure none robots learning enabled able match food collection rate optimal robot see case synaptic weights container motor synapses plotted figure robot fixed high container dopamine response fixed high mean synaptic weights container attraction plateau value less maximum possible connection probability maximum single synapse weight maximum mean synaptic weight group sensor neurons group motor neurons reason level lower value due motor neuron differentiation discussed section figure motor neurons differentiated respond behaviour become potentiated addition synapses container avoidance also synaptic weights increased though lesser extent due negative baseline level dopamine mechanism discussed depth section two mechanisms needed robot able modify behaviour environment changes direct consequence ability cope dynamic environment robot perform well robot fixed optimal set weights environment greater difference 
synaptic weights allows benchmark robot respond weak sensory input edge sensors range perform taxis away container range sensor motor synaptic weights fixed high robot mean synaptic weight left sensor left motor left sensor right motor right sensor left motor right sensor right motor time seconds figure mean synaptic weights container sensor neurons motor neurons fixed high robot environment interesting look happens case robot learn correct behaviour figure shows mean synaptic weights time left container sensor motors single trial fixed high container dopamine robot robot failed learn correctly seen figure synaptic weights sensor neurons motors close value strong synaptic weights increased decreased synchronous fashion reason left sensor connected strongly motors stimulating left sensor effect motor neurons fire response means eligibility traces sensor two motors increase together synaptic weights increase decrease together way synapses avoidance attraction become potentiated hard robot learn increase synaptic weights needed attraction behaviour left container sensor synaptic weights failed learning mean synaptic weight left sensor left motor left sensor right motor time seconds figure synaptic weights left container sensor left right motor neurons shown example robot unable learn container attraction behaviour weights become synchronized rise fall together cause sets synaptic weights become high first place still possible robot collect food items even performing container attraction behaviour example robot constantly drive straight purely chance run food container synaptic weights sensors motors increased due random exploration may happen occur times row point robot get stuck performing behaviour dual behaviour learning finally demonstrate ability robot learn two different sequential behaviours time behaviours experimental environmental section used food containers randomly positioned environment three different robots run environment trials seconds trial three robots random walk robot sensor motor connections set zero plasticity switched food collected randomly driving benchmark robot synaptic weights behaviour set maximum value sensor motor synaptic weights set zero plasticity switched learning enabled initially sensor motor synapses set zero plasticity enabled synapses addition dopaminergic neuron synapses initially set zero plasticity enabled synapses results discussion average food collection rate time three robots shown figure collection rate learning robot seconds comparable robots behaviour previous section see figure learning robot able learn behaviour definition correctness sections used trials slightly worse figure achieved behaviour expected longer periods food collection chance random fluctuations synaptic weight cause two sensor motor groups become synchronized described section figure mean synaptic weights learning robot averaged trials shown first behaviour learnt shown quick potentiation corresponding synapses response behaviour learnt parallel behaviour reinforced corresponding increase release dopamine food collection rate environment dual behaviour learning collection rate collections per second benchmark learning enabled random walk time seconds figure collection rate time averaged trials robots shown robot learning enabled increases collection rate time food sensor motor connections environment dual learning mean synaptic weight left food sensor left motor left food sensor right motor right food sensor left motor right food sensor right motor time seconds food sensor 
motor synaptic weights container sensor motor connections environment dual learning mean synaptic weight left container sensor left motor left container sensor right motor right container sensor left motor right container sensor right motor time seconds container sensor motor synaptic weights dopaminergic synaptic weights environment dual learning mean synaptic weight time seconds dopaminergic neurons synaptic weights figure synaptic weights dual learning robot environment shows mean synaptic weights food motors synapses behaviour quickly potentiated shows mean synaptic weights rangesensors motors synapses behaviour potentiated slower rate shows mean synaptic weight dopaminergic neurons behaviour learnt synapses slows potentiated chapter conclusions reinforcement learning comparing properties implementation classical reinforcement learning algorithms outline given section see several properties model correlates reinforcement learning algorithms obvious eligibility trace stored within synapses directly correlates eligibility trace defined standard reinforcement learning algorithms sarsa role eligibility trace situations allow agent learn much faster greater time frame case novel exploration strategy whereby randomly stimulate motors small current considered analogous reinforcement learning algorithms whereby probability choosing action related expected reward implementation strength behaviour increases confidence produce best reward increases probability performing exploratory behaviour decreases reinforcement learning algorithms value function propagated state space example initially goal state positive value associated repeated exploration environment states close goal state positive value function similar way network able learn dopaminergic neural response stimulus often precede food collection key difference implementation reinforcement learning algorithms standard reinforcement learning algorithms value function updated based difference expected received reward example value function predicts reward agent subsequently receives reward value function decremented value function thought analogous strength connections dopaminergic neurons model used paper able produce negative response trained stimulus subsequently stimulated paired reward example robot trained environment remove food items network still elicit dopamine response stimulated attraction behaviour become unlearnt main limitation implementation possible solutions discussed section implementation provides method reinforcement learning continuous parameter spaces standard reinforcement learning algorithms designed state actions spaces discretized able operate continuous spaces directly allows agent generalize much easily previously unseen environmental state encountered biological plausibility conclusions paper shown allowing baseline level dopamine negative robot able relearn current behaviour producing reward brain level dopamine become negative though may another type pain neurotransmitter acts reverse way dopamine effect reducing highly active synapses though whether one exists currently unknown one consequence model time motor neurons tended differentiate respond single sensor interesting property provides mechanism group neurons brain self organize functionally different subgroups brain shown dopaminergic neurons two different states activity background firing bursting activity stimulated background firing result significant level dopamine increase model shown separating dopaminergic neuron activity two distinct groups robot 
able learn dopamine response rewardpredicting stimulus much reliably provides one possible explanation dopaminergic neurons two different firing states chapter future work several mechanisms robot used explicitly programmed separate neural network would advantageous provide implementation within spiking neural network subject plasticity modified response environmental changes along rest network mechanisms possible snn implementation given phasic activity sensor neurons stimulated rate rather given constant input current eeg recordings show brain also exhibits neural oscillatory behaviour group interconnected inhibitory neurons provided constant input current generate oscillatory firing group neurons could used control group connected sensory neurons forcing fire oscillatory manner explicitly implemented winner takes mechanism sensors motors one way could possibly implemented directly snn shown figure exploration explicitly stimulated motors provide exploratory behaviour however phasic activity mechanisms implemented directly network described explicit exploration behaviour would needed combination background firing would ensure one set motor neurons active one time whilst phasic activity would allow active motor optionally switch inactive phase neurons simulus stimulus stimulus neurons stimulus neurons inhibitory group inhibitory group external inputs excitatory neurons inhibitory neurons figure example mechanism implemented snn set stimulus neurons acts inhibit network reaches stable state one group active main limitation approach taken paper robot unlearn already learnt dopamine response able fuller snn implementation algorithm would need implemented whereby dopamine response would correspond difference expected reward perceived reward able requires snn memory chorley seth shown network architecture level dopamine corresponds difference perceived expected reward use model memory future work would incorporate network architecture robot allow sequential behaviours unlearnt another obvious extension work paper would implement control architecture real rather simulated robot case robot would need able deal collision avoidance could possibly solved similar reinforcement learning mechanism described paper bibliography izhikevich solving distal reward problem linkage stdp dopamine signaling cerebral cortex chorley seth closing loop dopamine signalled reinforcement learning proceedings international conference simulation adaptive behavior animals animats berlin heidelberg sab iannella back spiking neural network architecture nonlinear function approximation neural networks hodgkin huxley quantitative description membrane current application conduction excitation nerve journal physiology brunel van rossum lapicques paper frogs biological cybernetics fitzhugh impulses physiological states theoretical models nerve membrane biophysical hindmarsh rose model neuronal bursting using three coupled first order differential equations proceedings royal society london series biological sciences izhikevich simple model spiking neurons neural networks ieee transactions rosenblatt principles neurodynamics spartan books widrow stearns burgess adaptive signal processing edited bernard widrow samuel stearns journal acoustical society america werbos beyond regression new tools prediction analysis behavioral sciences thesis harvard university cambridge sutton barto reinforcement learning introduction adaptive computation machine learning mit press bienenstock theory development neuron selectivity orientation 
specificity binocular interaction visual cortex journal neuroscience markram lbke frotscher sakmann regulation synaptic efficacy coincidence postsynaptic aps epsps science song abbott cortical development remapping spike plasticity neuron izhikevich desai relating stdp bcm neural computation otmakhova lisman dopamine receptor activation increases magnitude early potentiation hippocampal synapses journal neuroscience chinta andersen dopaminergic neurons international journal biochemistry cell biology miller freedman wallis prefrontal cortex categories concepts cognition ping shepard channels regulate pacemaker activity nigral dopamine neurons neuroreport overton clark burst firing midbrain dopaminergic neurons brain research reviews lisman bursts unit neural information making unreliable synapses reliable trends neurosciences matsumoto hikosaka two types dopamine neuron distinctly convey positive negative motivational signals nature frey morris synaptic tagging potentiation nature nature ppper kempter leibold synaptic tagging evaluation memories distal reward problem learning memory redondo morris making memories last synaptic tagging capture hypothesis nature hull principles behavor introduction behavior theory pavlov conditioned reflexes oxford university press pan schmidt wickens hyland dopamine cells respond predicted events classical conditioning evidence eligibility traces network journal neuroscience ljungberg apicella schultz responses monkey midbrain dopamine neurons delayed alternation performance brain research chorley seth reward predictions generated competitive excitation inhibition spiking neural network model frontiers computational neuroscience brown bullock grossberg basal ganglia use parallel excitatory inhibitory learning pathways selectively respond unexpected rewarding cues journal neuroscience tan bullock local circuit model learned striatal dopamine cell responses probabilistic schedules reward journal neuroscience schultz romo dopamine neurons monkey midbrain contingencies responses stimuli eliciting immediate behavioral reactions journal neurophysiology schultz predictive reward signal dopamine neurons journal neurophysiology ljungberg apicella schultz responses monkey dopamine neurons learning behavioral reactions journal neurophysiology turrigiano homeostatic plasticity neuronal networks things change stay trends neurosciences turrigiano leslie desai rutherford scaling quantal amplitude neocortical neurons nature adrian zotterman impulses produced sensory nerve endings part response single end organ journal physiology maunsell van essen functional properties neurons middle temporal visual area macaque monkey selectivity stimulus direction speed orientation journal neurophysiology zador charles enigma brain current biology braitenberg vehicles experiments synthetic psychology mit press indiveri neuromorphic analog vlsi sensor visual tracking circuits application examples circuits systems analog digital signal processing ieee transactions lewis cohen hartmann toward biomorphic control using custom avlsi cpg chips robotics automation proceedings icra ieee international conference vol hagras colley callaghan clarke evolving spiking neural network controllers autonomous robots robotics automation proceedings icra ieee international conference may vol bouganis shanahan training spiking neural network control robotic arm based spike plasticity neural networks ijcnn international joint conference july vasilaki fremaux urbanczik senn gerstner reinforcement learning 
continuous state action space policy gradient methods fail plos computational biology gupta long hebbian learning winner take spiking neural networks neural networks ijcnn international joint conference june timofeev bazhenov thalamocortical oscillations scholarpedia
9
international journal foundations computer science technology ijfcst vol november comparative study remote tracking parkinson disease progression using data mining methods peyman abdolreza hatamlou mohammad department computer engineering science research branch islamic azad university west azerbaijan iran islamic azad university khoy branch iran department computer engineering urmia branch islamic azad university urmia iran abstract recent years applications data mining methods become popular many fields medical diagnosis evaluations data mining methods appropriate tools discovering extracting available knowledge medical databases study divided data mining algorithms five groups applied dataset patient clinical variables data parkinson disease study disease progression dataset includes properties people algorithms applied dataset decision table correlation coefficients best accuracy decision stump correlation coefficients lowest accuracy keywords data mining knowledge discovery pattern recognition parkinson disease introduction parkinson disease chronic neurological disorder unknown etiology usually affects people years old reported parkinson beyond many unknown cases affected one million people expected age earth population increases number patients also increases number parkinson disease patients estimated every people although percentage hence number affected people increasing life expectancies increase causes parkinson disease unknown however researches shown degradation dopaminergic neurons affects dopamine production symptoms include limb tremor especially rest muscles stiffness movement slowness difficulty walking balance coordination especially beginning motion difficulty eating swallowing vocal disorder mood disorders disease causes voice disorder patients dysarthria motor speech disorder indicates inability expression properly observable patients hypokinetic dysarthria classical symptoms include reduced vocal loudness hypophonia monopitch disruption voice quality abnormally fast rate speech decades researchers strived understand international journal foundations computer science technology ijfcst vol november disease therefore find methods successfully limiting symptoms commonly periodic muscle tremor rigidity symptoms akinesia bradykinesia dysarthria may occur later stages recent years development innovation fusion data equipment sensors scientific medicine areas led production massive data values modern medicine produces massive amounts data stored databases whose use large values data timeconsuming cases impossible regard use data mining methods perform difficult task course data mining methods limitations constraints task compare methods algorithms choose best useful method term used data mining introduced however data mining progress improvement field long history datasets growing size applications complexity direct data analysis increasingly augmented indirect automatic data processing achieved discoveries computer science artificial neural networks anns classification clustering algorithms plans data mining techniques algorithms accomplished genetic algorithm decision trees dts also supported vector machine svm methods first use data mining techniques health information systems clinical application fulfilled expert systems developed since given knowledge experience sense physicians medicine decisions reviews done massive amount medicine data kept medical clinics health centers data mining methods appropriate tools discovering available knowledge database help physicians 
diagnosis treatment tracking process rest paper organized follows first explained data mining techniques algorithms used study next section data mining application healthcare field described section data mining algorithms categorized applied special dataset section results analysed discussed section finally concluding remarks proofs outcome well future research lines presented section ata mining database pertaining trade agriculture cyberspace details phone calls medical data etc collected stored rapidly development information technology data collection production methods since three decades ago human thinking new method access hidden data huge large database traditional systems able competition economic scientific political military fields importance access useful information among high amount data without human intervention caused data analysis science data mining established concept data mining developed late went years later data mining process discovering meaningful new correlations patterns trends sifting investigating large amounts data stored repositories using pattern recognition technologies well statistical mathematical techniques gartner group like mining gold huge rocks large amount soils data mining extracting beneficial knowledge bulky data sets entirely fine point data mining searching gaining knowledge obtain knowledge analysing reconsidering international journal foundations computer science technology ijfcst vol november many people consider data mining equivalent another commonly used term knowledge discovery data kdd alternatively others view data mining simply indispensable step process knowledge discovery typically steps knowledge discovery divided phases four primary steps data different forms data preparation data mining mentioned data mining fifth steps one necessary steps process finally two last steps task identifying useful patterns displaying user brief description steps data cleaning clean incompatible data noises data integration integrate multiple sources data selection restore data related analysis evaluate database data transformation transform modulate data searchable forms summarizing aggregating data mining process applying intelligent algorithms exploiting data patterns pattern evaluation identify related patterns knowledge presentation present extracted knowledge users using presentation techniques visual modeling therefore data mining step kdd process consisting particular enumeration patterns data subjected computational limitations term pattern goes beyond traditional concept notion include structures models data warehouses historical data used discover regularities improve future decisions goals data mining divide two general types predictive descriptive application predictive type understand application model attempts predict value certain variable may take given know present descriptive data mining characterizes generic attributes data database ata mining ealthcare today medical areas data collection different diseases important medical centers different purposes collect data survey data obtain useful results patterns relation disease one objectives use data collected data volume high must use data mining techniques obtain desired patterns results among massive volume data medical health areas one important sections industrial societies extraction knowledge among massive volume data related diseases people medical records using data mining process lead identifying laws governing creation development epidemic diseases allow expertise health 
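The seven KDD steps listed above map naturally onto a scripted preprocessing-and-mining pipeline. The following is a minimal illustrative sketch in Python with scikit-learn; the column names, file name, and the library itself are assumptions made purely for illustration, since the study described here was carried out in the WEKA environment:

```python
# Illustrative KDD-style pipeline: cleaning -> selection -> transformation -> mining.
# File name, column names, and scikit-learn usage are assumptions for illustration.
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression

df = pd.read_csv("telemonitoring.csv")           # data integration: one merged source
df = df.drop_duplicates()                        # data cleaning: remove inconsistent rows
X = df.drop(columns=["motor_UPDRS"])             # data selection: keep analysis-relevant fields
y = df["motor_UPDRS"]

model = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),  # more cleaning: fill missing values
    ("scale", StandardScaler()),                 # data transformation: comparable form
    ("mine", LinearRegression()),                # data mining: fit a pattern-extracting model
])
model.fit(X, y)                                  # pattern evaluation belongs on held-out data
```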
area staffs access valuable data according environmental factors order identify diseases causes anticipate treat diseases finally means extending life comforting people community example medicine applications data mining predicting health care costs determination disease treatment analyzing processing medical images international journal foundations computer science technology ijfcst vol november analyzing health information system effect drugs disease side effects predicting success rate medicine operations like surgeries diagnosing predicting kind diseases cancer study published chang used data mining techniques predicting incidental hypertension hyperlipidemia presented analysis procedure simultaneously predict hypertension hyperlipidemia firstly chose six data mining approaches used algorithms select individual risk factors two diseases afterward determined common risk factors using voting principle next used multivariate adaptive regression splines mars method construct multiple predictive model hypertension hyperlipidemia study used data physical examination center database taiwan included subjects proposed predictor method study classification accuracy rate also eskidere two colleagues studied performance support vector machines svm least square support vector machines multilayer perceptron neural network mlpnn general regression neural network grnn regression methods application remote tracking parkinson disease progression found using regression estimated within points feature set within points feature set clinicians estimation one ascertainments current year daniel ansari enquired artificial neural networks anns nonlinear pattern recognition techniques used tool medical decision making based anns using clinical histopathological data patients flexible nonlinear survival model designed use anns predicting survival pancreatic cancer ann cox regression roposed methods implementation data mining wide common uses classification recognition problems health systems section describe materials methods used current research depending situation desired outcome various types data mining algorithms use experiments described paper performed using libraries weka machine learning environment building models data set randomly split two subsets data training set data test set data mining methods apply property data set parkinson telemonitoring dataset created athanasios tsanas max little university oxford collaboration medical centers intel corporation developed ahtd telemonitoring device record speech signals dataset made available online uci machine learning archive recently october dataset consists around recordings per patient people men women makes total voice recordings patient recorded phonations sustained vowel dataset contains attributes including subject number subject age subject gender time interval baseline recruitment data biomedical voice measures vocal features vocal features dataset international journal foundations computer science technology ijfcst vol november diverse based traditional measures jitter shimmer hnr nhr rpde dfa ppe based nonlinear dynamical systems theory dataset assessed baseline onset trial table description features updrs scores parkinson telemonitoring dataset trial periods voice recordings obtained weekly intervals hence motorupdrs linearly interpolated represents baseline three six months updrs scores feature labels short explanations measurement along basic statistics dataset functions simple linear regression slr simple linear regression model measurement 
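For reference, the simple linear regression (SLR) model that the following passage describes in words is conventionally written as follows; this is a standard reconstruction, since the paper's own typeset equations did not survive extraction:

\[
y_i = \beta_0 + \beta_1 x_i + \varepsilon_i, \qquad i = 1, \dots, n,
\]

where \(\beta_0\) is the intercept parameter, \(\beta_1\) the slope parameter, \(x_i\) the explanatory variable, and \(\varepsilon_i\) the error terms. Least squares chooses the estimates that make the sum of squared residuals \(\sum_{i=1}^{n} (y_i - \beta_0 - \beta_1 x_i)^2\) as small as possible; the no-intercept variant sets \(\beta_0 = 0\).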
error model subject extensive research statistical literature century well known simple linear regression model error predictor identifiable without extra information normal case statistics simple linear regression least squares estimator linear regression model single explanatory variable words simple linear regression fits straight line set points way makes sum squared residuals model vertical distances points data set fitted line small possible simple linear regression model observations generated according international journal foundations computer science technology ijfcst vol november intercept parameter slope parameter explanatory variables finally error terms obtain simple linear regression model without intercept see simple linear regression model shows model parameters figure simple linear regression model perceptron mlp mlp based single perceptron introduced rosenblatt network contains input layer hidden layer output layer neuron one layer receives weighted sum neurons previous layer provides input neurons later layer figure mlp structure used however merely single neuron output layer architecture neural network used research multilayer perceptron network architecture nodes hidden nodes approach showed mlp structure used study number input nodes determined ultimate data number hidden nodes determined trial error number output nodes represented rage demonstrating disease classification neuron one layer receives weighted sum neurons previous layer provides input neurons later layer international journal foundations computer science technology ijfcst vol november smoreg sequential minimal optimization smo algorithm classification tasks defined sparse data sets shown effective method training svm smoreg implements sequential minimal optimization algorithm training support vector regression model implementation globally replaces missing values transforms nominal attributes binary ones also normalizes attributes default normalized training dataset used polynomial kernel kernel used regsmo optimizing rules generate decision list regression problems used divide conquer builds model tree using makes best leaf rule iteration progress method generating rules model trees called straightforward works follows tree learner applied full training dataset pruned tree learned best branch made rule tree discarded instances covered rule removed dataset process applied recursively remaining instances terminates instances covered one rules basic strategy learning rules however instead building single rule usually done built full model tree stage make best branch rule contrast part partial decision trees employs strategy categorical prediction builds full trees instead partially explored trees building partial trees leads greater computational efficiency affect size accuracy resulting rules simulating mode set minimum number instances allow leaf node decision table decision table symbolic way express test experts design experts knowledge compact form decision table includes hierarchical table tables consider entry higher level table gets broken values pair additional attributes form another table structure similar dimensional stacking implementation used best first method find good attribute combinations decision table example final row tells classify applicant heart failure gender female age blood pressure important international journal foundations computer science technology ijfcst vol november trees algorithm generating model trees builds tree given instance predict numeric values input attributes 
either discrete or continuous. The algorithm requires the output attribute to be numeric. For a given instance, the tree is traversed from top to bottom until a leaf node is reached. At each internal node of the tree, a decision to follow a particular branch is made based on a test condition on the attribute associated with that node. Each leaf node has a linear regression model associated with it, of a form based on the input attributes of the instance, whose respective weights are calculated using standard regression; because the leaf nodes contain linear regression models that generate the predicted output, such a tree is called a model tree. Starting from a set of training instances, we build the model tree using the M5 method; this model tree algorithm is available in WEKA as the M5P Java class, where the minimum number of instances allowed at a leaf node can be set.

The REPTree algorithm is a fast decision tree learner, also based on the C4.5-style approach: it builds a classification tree (for a discrete outcome) or a regression tree (for a continuous outcome) using information gain or variance reduction, and prunes it using reduced-error pruning. It sorts the values of numeric attributes only once, and it deals with missing values by dividing and splitting the corresponding instances into pieces. To implement REPTree, we set the maximum tree depth so as to impose no restriction, together with a minimum proportion of the variance of the data that needs to be present at a node for splitting to be performed in regression trees.

A decision stump is the simplest special case of a decision tree: it consists of a single decision node and two prediction leaves, the decision node being a rule that checks the presence or absence of a specified condition. Boosted decision stumps are created using weighted voting, and decision stumps are usually used in conjunction with a boosting algorithm. They support both regression and classification, with missing treated as a separate value.

The lazy IBk nearest-neighbour classifier we used is among the simplest learning algorithms: given a test instance, it finds the training instance closest to the given test instance using a simple distance measure and predicts the class of that training instance; if multiple training instances are at the smallest distance from the test instance, the first one found is used. A similarity function measures the similarity of instances described by their attributes. Unlike the IB1 nearest-neighbour algorithm, IBk normalizes attribute ranges, processes instances incrementally, and has a simple policy for tolerating missing values. Our model used a linear search algorithm, implementing brute-force search, for the nearest-neighbour search, with Euclidean distance for nearest-neighbour classification.

LWL belongs to the lazy learning methods, which defer processing of the training data until a query needs to be answered. The algorithm assigns weights to instances; naive Bayes is a good choice for classification in this form of lazy learning. Locally weighted learning uses locally weighted training to average, interpolate, extrapolate, or otherwise combine the training data. In our implementation of the LWL model, all neighbours were used, the weighting function was a linear function, and a linear (brute-force) search algorithm with Euclidean distance was used for the nearest-neighbour search.

Regression by Discretization, in the meta category, is a regression scheme that utilizes a distribution classifier on a version of the data in which the class attribute is discretized. The predicted value is based on the predicted probabilities for each interval, taking the expected value of the mean class value of each discretized interval. The class also supports conditional density estimation by building a univariate density estimator from the target values in the training data, weighted by the class probabilities. To implement this model, the classifier algorithm was adopted due to its abilities; the number of bins for discretization was set, the estimator type for density estimation was a histogram estimator, and the created pruned tree's size and number of leaves were recorded.

Results and discussion. Here we report the results of the eleven methods used in this study of the usefulness of data mining algorithms for remote tracking of Parkinson's disease progression.
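All error measures compared below are standard and can be computed directly from the actual and predicted values. A minimal NumPy sketch follows; the function and variable names are mine and not part of the study, which obtained these figures through WEKA:

```python
# Standard regression error measures as reported in WEKA-style experiments.
import numpy as np

def regression_metrics(actual, predicted):
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    err = predicted - actual
    baseline = actual.mean()                     # the "simple predictor": mean of actual values
    mae  = np.mean(np.abs(err))                  # mean absolute error
    rmse = np.sqrt(np.mean(err ** 2))            # root mean squared error
    rae  = np.sum(np.abs(err)) / np.sum(np.abs(actual - baseline))       # relative absolute error
    rrse = np.sqrt(np.sum(err ** 2) / np.sum((actual - baseline) ** 2))  # root relative squared error
    cc   = np.corrcoef(actual, predicted)[0, 1]  # correlation coefficient
    return {"CC": cc, "MAE": mae, "RMSE": rmse, "RAE": rae, "RRSE": rrse}
```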
The data set was divided into two subsets, training data and test data: one part of the data was used exclusively for training and the rest exclusively for testing. The accuracies obtained in this study on the dataset are presented below. To compare the results of the algorithms, we use the correlation coefficient, the mean absolute error, the root mean squared error, the relative absolute error (as a percentage), and the root relative squared error (as a percentage). In statistics, the mean absolute error (MAE) is a quantity used to measure how close forecasts or predictions are to the eventual outcomes; the root mean square error (RMSE) is a frequently used measure of the differences between the values predicted by a model or estimator and the values actually observed.

[Table: experimental results of the classification models, summarized by correlation coefficient and algorithm category.] Notice that the Decision Table gave the highest accuracy (correlation coefficient) on this dataset, and that the M5Rules algorithm of the rules category had a comparatively good, nearly equivalent accuracy. [Figure: correlation coefficients of the classification models.]

The relative absolute error is similar to the relative squared error in the sense that it is also relative to a simple predictor, namely the average of the actual values; in this case, though, the error is the total absolute error instead of the total squared error. Thus the relative absolute error takes the total absolute error and normalizes it by dividing by the total absolute error of the simple predictor. The root relative squared error is relative to what the error would have been had the simple predictor been used, again the average of the actual values: the relative squared error takes the total squared error and normalizes it by dividing by the total squared error of the simple predictor, and by taking the square root one reduces the error to the same dimensions as the quantity being predicted. We show these two parameters in a chart. [Figure: relative absolute error and root relative squared error of the classification models.]

Conclusion and future works. This experimental study compares the performance of eleven different data mining algorithms using the Parkinson's Telemonitoring dataset, which comprises attributes with various ranges of values. Early detection of this kind of disease is an important factor, and remote tracking of UPDRS using voice measurements would facilitate clinical monitoring of elderly people and increase the chances of early diagnosis. We used the data mining techniques simple linear regression (SLR), multilayer perceptron (MLP), SMOreg, M5Rules, Decision Table, M5P, REPTree, Decision Stump, IBk, LWL, and Regression by Discretization. The best approach for remote Parkinson telemonitoring was the Decision Table model; the M5Rules algorithm of the rules category also gave a good result in correlation coefficient. The mathematical equations by which the different data mining techniques calculate the experimental results were derived above.

References
Tsanas, Little, McSharry, Ramig: Enhanced classical dysphonia measures and sparse regression for telemonitoring of Parkinson's disease progression. IEEE International Conference on Acoustics, Speech and Signal Processing.
Lang, Lozano: Parkinson's disease (first of two parts). New England Journal of Medicine.
Rizk, Muriel, Duyckaerts, Oertel, Caille, et al.: Dopamine depletion impairs precursor cell proliferation in Parkinson disease. Nature Neuroscience.
Sakar, Kursun: Telediagnosis of Parkinson's disease using measurements of dysphonia. Journal of Medical Systems.
Skodda, Schlegel: Speech rate and rhythm in Parkinson's disease. Movement Disorders.
Tsanas, Little, McSharry, Ramig: Accurate telemonitoring of Parkinson's disease progression by noninvasive speech tests. IEEE Transactions on Biomedical Engineering.
Little, McSharry, Hunter, Ramig: Suitability of dysphonia measurements for telemonitoring of Parkinson's disease. IEEE Transactions on Biomedical Engineering.
Skodda, Rinsche, Schlegel: Progression of dysprosody in Parkinson's disease over time: a longitudinal study. Movement Disorders.
Aziz, Peggs, Sambrook, Crossman: Lesion of the subthalamic nucleus for the alleviation of MPTP-induced parkinsonism in the primate. Movement Disorders.
Walter: Data mining industry: emerging trends and new opportunities. Massachusetts Institute of Technology.
Kantardzic: Data Mining: Concepts, Models, Methods, and Algorithms. John Wiley & Sons.
Han, Kamber: Data Mining: Concepts and Techniques. Morgan Kaufmann Publishers.
Zafarani, Jashki, Baghi, Ghorbani: A novel approach for social behavior analysis of the blogosphere. In: Bergler (ed.), Canadian AI, Springer, Berlin Heidelberg.
Gharehchopogh, Mohammadi, Hakimi: Application of decision tree algorithm for data mining in healthcare operations: a case study. International Journal of Computer Applications.
Mitchell: Machine learning and data mining. Communications of the ACM.
Gorunescu: Data Mining: Concepts, Models and Techniques. Intelligent Systems Reference Library, Springer.
Chang, Wang, Jiang: Using data mining techniques for multi-diseases prediction modeling of hypertension and hyperlipidemia by common risk factors. Expert Systems with Applications.
Eskidere, Ertas, Hanilci: A comparison of regression methods for remote tracking of Parkinson's disease progression. Expert Systems with Applications.
Ansari, Nilsson, Andersson, Tingstedt, Andersson, et al.: Artificial neural networks predict survival from pancreatic cancer after radical surgery. American Journal of Surgery.
Thoresen, Laake: On the simple linear regression model with correlated measurement errors. Journal of Statistical Planning and Inference.
Warwick: Artificial Intelligence: The Basics. Taylor & Francis.
Gharehchopogh, Mohammadi: A case study of Parkinson's disease diagnosis using artificial neural networks. International Journal of Computer Applications.
Smola, Scholkopf: A tutorial on support vector regression. Technical report series.
Goodwin, VanDyne, Lin, Talbert: Data mining issues and opportunities for building nursing knowledge. Journal of Biomedical Informatics.
Braha, Shmilovici: Data mining for improving a cleaning process in the semiconductor industry. IEEE Transactions on Semiconductor Manufacturing.
Fayyad, Uthurusamy: Evolving data mining into solutions for insights. Communications of the ACM.
Zhou: Three perspectives of data mining. Artificial Intelligence.
Mattison: Data Warehousing: Strategies, Technologies and Techniques.
SPSS whitepapers: Statistical analysis with SPSS.
Welland: Decision Tables and Computer Programming. Heyden & Son.
Kohavi: The power of decision tables. In: Lavrac, Wrobel (eds.), Machine Learning: Proceedings of the Eighth European Conference on Machine Learning, Lecture Notes in Artificial Intelligence, Springer-Verlag, Berlin Heidelberg.
Quinlan: Learning with continuous classes. In: Proceedings of the Australian Joint Conference on Artificial Intelligence.
Polumetla: Machine learning methods for the detection of RWIS sensor malfunctions. Master's thesis, University of Minnesota.
Snousy, Mohamed, Badran, Khlil: Suite of decision tree-based classification algorithms on cancer gene expression data. Egyptian Informatics Journal.
Freund, Schapire: A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences.
Aha, Kibler: Instance-based learning algorithms. Machine Learning.
Vapnik, Bottou: Local algorithms for pattern recognition and dependencies estimation. Neural Computation.

Authors. Peyman Mohammadi is a student at the Department of Computer Engineering, Science and Research Branch, Islamic Azad University, West Azerbaijan, Iran. His research interests include artificial neural networks, data mining, and machine learning techniques.
5
may atari grand challenge dataset vitaly kurin visual computing institute rwth aachen university aachen germany vitaliykurin sebastian nowozin katja hofmann machine intelligence perception group microsoft research cambridge lucas beyer bastian leibe visual computing institute rwth aachen university aachen germany beyer leibe abstract recent progress reinforcement learning fueled combination deep learning enabled impressive results learning interact complex virtual environments yet applications still scarce key limitation data efficiency current approaches requiring millions training samples promising way tackle problem augment learning human demonstrations however human demonstration data yet readily available hinders progress direction present work addresses problem follows collect describe large dataset human atari replays largest diverse data set publicly released date illustrate example use dataset analyzing relation demonstration quality imitation learning performance iii outline possible research directions opened work introduction reinforcement learning agent learns trial error perform task initially unknown environment recently research area seen dramatic progress complex interactive tasks virtual environments mnih silver schulman largely driven combinations deep learning yet despite recent progress applications still largely lacking setup agents learn solve task completely scratch causes one key limitations deep approaches data inefficiency comparison autonomous agents humans lot prior information world every day base decisions knowledge culture social relationships experience information get experience others result learning new task people effective need less actions master training agent executes lot actions human never would ineffective renders application complex potentially dangerous environments infeasible possible solution problem learn human demonstrations schaal russell abbeel monfort hester community focused building environments test beds models today approaches trained compared diverse tasks environments ale bellemare openai gym brockman microsoft project malmo johnson publicly available datasets human demonstrations tasks environments lack hampers progress research learning human demonstration examples imagenet deng computer vision switchboard godfrey speech recognition shown datasets catalyze research progress order accelerate research learning demonstration release describe illustrate use atari grand challenge dataset human atari replays contributions collect analyze release research community largest diverse dataset human atari replays date dataset comprises million frames hours game play five games order magnitude larger previous datasets illustrate one use dataset analyzing relation demonstration quality imitation learning performance iii discuss research directions opened work background section outlines key concepts notation used throughout paper operate usual reinforcement learning setup agent acts environment response state observations learns reward signal reflects abstract notion consequences actions taken setup formulated markov decision process mdp defined tuple set states set actions reward function transition function returns probability states given state action iteration interaction environment agent observes state takes action gets reward transition agent behavior characterized policy function returns action given state policy stochastic experimental analysis section based recently proposed imitation algorithm suggested hester turn based watkins 
and Dayan's Q-learning, and in particular on DDQN (van Hasselt et al.). We next give an overview of the relevant approaches.

Q-learning. Watkins and Dayan's Q-learning algorithm centers on learning an approximate action-value function Q(s, a) that reflects the expected discounted cumulative reward of taking a particular action a in state s and following a particular policy thereafter. The optimal Q-function satisfies the Bellman equation

\[ Q^*(s,a) = \mathbb{E}\big[\, r(s,a) + \gamma \max_{a'} Q^*(s',a') \,\big], \]

where the discount factor \(\gamma\) trades off immediate versus longer-term rewards. The optimal policy is the policy that takes the best possible decision at every time step: \(\pi^*(s) = \arg\max_a Q^*(s,a)\).

DQN. DQN (Mnih et al.) is a Q-learning variant that uses a neural network, called a deep Q-network, to approximate the Q-function; the network returns action values for all actions available in the current state. A separate target network is used to compute the training updates, together with a replay memory of past experience for minibatch sampling; both were shown to improve training stability and resulted in breakthrough results in learning to play Atari games. Double DQN (van Hasselt et al.) is an extension of DQN that decouples the selection and the value estimate of actions under the max operator, which was shown to result in more accurate Q-value approximations both theoretically and in practice. Its learning objective is

\[ J_{DQ}(Q) = \Big( r + \gamma\, Q\big(s', \arg\max_{a'} Q(s',a';\theta);\, \theta^{-}\big) - Q(s,a;\theta) \Big)^2 . \]

Recent work by Hester et al. suggests an approach to imitation learning that combines the double DQN objective with a large-margin classification loss aimed at keeping the learned policy close to the demonstrated behavior:

\[ J(Q) = J_{DQ}(Q) + \lambda_1 J_{L2}(Q) + \lambda_2 J_{E}(Q), \]

where \(J_{L2}\) is a regularization term and \(J_E\) is the supervised learning loss

\[ J_E(Q) = \max_{a \in A}\big[\, Q(s,a) + \ell(a_E, a) \,\big] - Q(s, a_E), \]

in which the large-margin function \(\ell(a_E, a)\) returns a positive number when a is not the expert action \(a_E\) and zero otherwise. The large-margin classification loss prevents the learner, in previously unseen states, from preferring actions not taken by the expert: the values of actions taken by the expert are forced to be a margin higher than those of unseen ones. The methods described above can also be applied completely offline, on data collected in the process of human interaction with the environment.

Constructing the Atari Grand Challenge dataset. This section details our approach to collecting the Atari Grand Challenge dataset and describes the tools we have made public together with the data.

Collecting the dataset. We collected the dataset using a web application built around an Atari emulator written in JavaScript. Given an initial state and the full sequence of human inputs, the emulator is completely deterministic. To avoid the excessive burden of saving screenshots of the game at every single time step as it is played, we instead record the initial state and the player inputs, and generate the dataset of images offline, during playback; this makes data collection at large scale feasible with limited resources. At each time step we process the screenshot of the game, the action taken at this time step, the reward, the current score information, and whether the time step is terminal. Since incomplete episodes still carry useful information, we save an episode either when the player closes the application tab or browser window, or when the game ends. This functionality is entirely processed client-side, within the browser; we support all major web browsers (Google Chrome, Mozilla Firefox, Microsoft Edge, Safari). The server is responsible for saving the data and later loading it for replaying; the data is saved in a PostgreSQL database, and the replay process is automated. Ours is a case of a good example of gamified crowdsourcing: using people's desire to play for useful things. In order to engage people, we added two progress bars, one of which compares the player's performance with the best human player's result, while the other shows a comparison with DQN performance taken from Mnih et al.

To be able to use the dataset, we take two preprocessing steps. First, we try to eliminate differences between the Javatari emulator and ALE. One difference we found is that the states we get from Javatari are vertically shifted by several pixels in comparison with ALE states; we eliminate this difference by shifting the states from ALE and padding them with zeroes at the bottom and top borders. Second, during the first frames of a recording, before the emulated Atari memory is fully initialized, one might get an excessively large score that is not correct; we therefore fix the rewards of the first several frames to zero. We are also not interested in games where a person did not interact with the application beyond opening and closing it, and we filter out such cases by simply removing all games with a final score of zero. The pipeline is fully automated, so new data can be processed with the code we provide.

Table: Atari Grand Challenge statistics per game (episodes, frames, hours of gameplay, worst score, best score).

Game                 | Episodes | Frames | Gameplay (hrs) | Worst score | Best score
Space Invaders       | ...      | ...    | ...            | ...         | ...
Q*bert               | ...      | ...    | ...            | ...         | ...
Ms. Pac-Man          | ...      | ...    | ...            | ...         | ...
Video Pinball        | ...      | ...    | ...            | ...         | ...
Montezuma's Revenge  | ...      | ...    | ...            | ...         | ...

Properties of the Atari Grand Challenge dataset. Description of the
dataset section briefly describe atari grand challenge dataset consists show properties deem particularly relevant research learning human demonstrations scale dataset consists human replays five popular atari games video pinball bert space invaders montezuma revenge choice games random want vary level difficulty according results mnih dqn able play first game significantly better human players results second third comparable human performance latter two hard dqn atari grand challenge dataset consists game episodes positive final score million frames hours playing time frames per second table shows statistics dataset fig shows sample screenshots figure sample screenshots dataset left right games shown space invaders bert pacman video pinball montezuma revenge diversity since data collected wild players good bad result fig shows atari grand challenge dataset quite diverse terms final score distribution episode final score time played already make assumptions different players level expertise else show player diversity quantitatively natural assume experienced player effective new player achieve challenging reward experienced player fig shows players equal access rewards least games question comparison advanced expert groups see expert players faster achieving rewards rightmost column data points look shifted left given final score advanced group higher achieve shorter periods time figure atari grand challenge dataset human demonstrators score dependency time extensibility currently dataset comprises five atari games easily extended adding new game publish dataset code data collection atari games available ale bellemare added within hours work support paddle games like breakout pong since almost impossible exactly repeat noisy controller collect states influence data quality imitation learning performance experiments description shown section replays good players well replays bad players section show dataset used study demonstrator expertise influence performance imitation learning experiment filter training data minimum score train model frames episodes final score threshold percentile percentile top data percentile top also train model whole dataset train model completely use regularization term hester since data train use actions suggested paper use network architecture mnih train iterations use adam kingma optimizer learning rate target network update interval size training run one million updates code written chainer tokui within framework since frameskip data collection use frameskip coefficient normalize reward values dividing raw rewards largest reward value observed data negative rewards case experimentation code found github figure reward frame reward obtained show players equal access rewards experienced players efficient game episodes divided follows novice percentile scores average advanced expert percentile results training evaluate performance model episodes every updates training take model best average results games report average score standard error mean table line hypothesis higher filter value data better performance performance imitation model dataset three five games explained looking table data lower diverse human demonstrator scores hester time video pinball data better human scores model performs better https https https table average score standard error mean games models trained subsets data filtered score first four rows use offline part imitation algorithm without regularization hester train data top means training data consists episodes final score higher equal 
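that is, with final score at or above the corresponding percentile score. This filtering step is compact enough to sketch in NumPy; the episode structure and the cut-off values below are illustrative assumptions, not the paper's code:

```python
# Keep only demonstration episodes whose final score reaches a percentile threshold.
import numpy as np

def filter_by_percentile(episodes, q):
    """Return the episodes with final score at or above the q-th percentile."""
    scores = np.array([ep["final_score"] for ep in episodes])
    threshold = np.percentile(scores, q)
    return [ep for ep in episodes if ep["final_score"] >= threshold]

episodes = [{"final_score": s} for s in (0, 120, 430, 880, 1500)]  # toy data
top_half = filter_by_percentile(episodes, 50)
# Rewards can likewise be normalized by dividing by the largest reward
# value observed in the data, as described in the training setup above.
```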
Evaluation is performed by following the learned policy. Mnih et al. report the standard deviation of scores without information on the number of evaluation episodes; we thus report the standard error of the mean (SEM) where available.

Table: average score and SEM per game for models trained on subsets of the data filtered by score. The first four rows use the offline part of the imitation algorithm without the regularization term (Hester et al.); "top q" means the training data consists of episodes with final score at or above the q-th percentile score.

Model                     | Space Invaders | Q*bert | Video Pinball | Montezuma's Revenge
Imitation (all data)      | ...            | ...    | ...           | ...
Imitation (top ...)       | ...            | ...    | ...           | ...
Imitation (top ...)       | ...            | ...    | ...           | ...
Imitation (top ...)       | ...            | ...    | ...           | ...
Imitation (Hester et al.) | ...            | ...    | ...           | ...
DQN (Mnih et al.)         | ...            | ...    | ...           | ...
DDQN (van Hasselt et al.) | ...            | ...    | ...           | ...
Random (uniform)          | ...            | ...    | ...           | ...

Table: comparison of human scores between the data of Hester et al. and the Atari Grand Challenge dataset.

                                  | Space Invaders | Q*bert | Ms. Pac-Man | Video Pinball | Montezuma's Revenge
Hester et al.: worst score        | ...            | ...    | ...         | ...           | ...
Hester et al.: best score         | ...            | ...    | ...         | ...           | ...
Hester et al.: transitions        | ...            | ...    | ...         | ...           | ...
Atari Grand Challenge: worst score| ...            | ...    | ...         | ...           | ...
Atari Grand Challenge: best score | ...            | ...    | ...         | ...           | ...
Atari Grand Challenge: transitions| ...            | ...    | ...         | ...           | ...

Related work. Two directions of research work on leveraging demonstration data for training an autonomous agent: inverse reinforcement learning (IRL) and imitation learning. The former addresses scenarios in which we have no access to the reward function; for true tasks the goal is often underspecified, and it is sometimes hard to provide a reward that represents all the useful information of an expert demonstration. The general idea is to approximate the reward function and then learn a policy using this approximation (Ng and Russell; Abbeel and Ng). Whilst IRL can benefit from the Atari Grand Challenge dataset by ignoring its reward information, imitation learning is the most direct benefactor of the dataset. Imitation learning exploits the reward information to learn a Q-function or directly a policy; Schaal uses a model to speed up training and gives an interesting comparison of the influence of model-based and model-free setups, noting that Q-learning benefits from using demonstration data. The latest work on learning from demonstration shows that Q-learning can also greatly benefit from using human player data (Hester et al.; Subramanian et al.; Hosu and Rebedea). The datasets collected for the learning-from-demonstration research described above are either small or not available for public use; the Atari Grand Challenge dataset we release is by far the largest and most diverse, in terms of the types of games as well as the amount and the types of human players.

The community has so far mostly been focusing on building environments for training autonomous agents: ALE (Bellemare et al.), OpenAI Gym (Brockman et al.), Microsoft's Project Malmo (Johnson et al.). The final goal is operating in these environments while maximizing the final score, so even if we train our models offline it sounds reasonable to evaluate their performance within such an environment. In the table below we describe the datasets that are coupled with interactive environments.

Table: learning-from-demonstration datasets in comparison. In the replay data of Hosu and Rebedea (published for Montezuma's Revenge and Private Eye), checkpoints are saved states of the environment that are used for continuing an episode.

Dataset               | Domain            | Tasks | Size (transitions) | Open | Diverse player expertise
Atari Grand Challenge | Atari             | 5     | ... mil            | yes  | yes
Udacity               | driving simulator | ...   | ... (cameras)      | ...  | ...
Hosu and Rebedea      | Atari             | 2     | ... mil            | ...  | ...
Hester et al.         | Atari             | ...   | ... mil            | ...  | ...

Atari games have recently begun to take on a role in this research similar to the one MNIST has taken in computer vision: an experimentation ground. Many implementations of RL algorithms are evaluated on these games, so it is much easier to compare approaches leveraging human behavior data with pure RL implementations, or even with combinations of the two.

Discussion and future work. Our work opens up a wide range of work that benefits from and uses human demonstrations effectively and efficiently for learning to interact with complex environments.

Extending the Atari Grand Challenge dataset. The Atari Grand Challenge website is still up and people keep playing, so we plan to update the dataset in the future as more data becomes available. An important development of the dataset would be to collect data from professional players who achieve higher scores: we have shown that data quality affects the final performance dramatically, so this would be a good improvement.

Exploiting the Atari Grand Challenge dataset. Video games are a perfect testing ground for evaluating hypotheses about learning. We can use human data to achieve higher sample efficiency and make the training process faster; future research can focus on improving the sample efficiency of RL algorithms by leveraging data of diverse quality. In this paper we have shown one possible dataset application: how data quality influences the final performance of imitation learning (Hester et al.). We hope that researchers in machine learning, game AI, and maybe even cognitive science will find something useful for their research purposes; we find the following applications particularly appealing. Recently, inverse
reinforcement learning imitation learning regained popularity ermon baram hester dataset direct impact kind research interesting check take something useful bad players data even experienced players make mistakes throwing data waste potentially important information shiarlis investigates topic inverse reinforcement learning domain might interesting see similar approach atari learning demonstration domain frameskip coefficient shown important braylan sharma lakshminarayanan would interesting investigate frameskip using human data extract atari grand challenge dataset curriculum learning proven useful leibfried data players different expertise investigate curriculum learning respect also seen attempts investigate humans learn play atari tsividis dataset might interesting kind research https conclusion release dataset human atari replays five games million frames hours game play time describing main properties scale diversity show order achieve high performance important collect data players high level expertise collect lot data plan update dataset future adding professional atari players data release code data collection well gives opportunity everybody extend dataset also show possible research directions atari grand challenge dataset could used hope release catalyze research learning human demonstration acknowledgments vitaly kurin would like thank microsoft research cambridge hosting project microsoft azure research grant authors would also like thank paulo peccin javatari creator emulator useful discussions references abbeel apprenticeship learning via inverse reinforcement learning proceedings international conference machine learning page acm baram anschel mannor adversarial imitation learning arxiv preprint bellemare naddaf veness bowling arcade learning environment evaluation platform general agents journal artificial intelligence research jun braylan hollenbeck meyerson miikkulainen frame skip powerful parameter learning play atari workshops aaai conference artificial intelligence brockman cheung pettersson schneider schulman tang zaremba openai gym deng dong socher imagenet hierarchical image database ieee conference computer vision pattern recognition pages ieee godfrey holliman mcdaniel switchboard telephone speech corpus research development acoustics speech signal processing ieee international conference volume pages ieee hester vecerik pietquin lanctot schaul piot sendonaris osband agapiou leibo gruslys learning demonstrations real world reinforcement learning arxiv preprint ermon generative adversarial imitation learning neural information processing systems pages hosu rebedea playing atari games deep reinforcement learning human checkpoint replay arxiv preprint johnson hofmann hutton bignell malmo platform artificial intelligence experimentation international joint conference artificial intelligence ijcai page kingma adam method stochastic optimization arxiv preprint lakshminarayanan sharma ravindran dynamic frame skip deep network corr url http leibfried kushman hofmann deep learning approach joint video frame reward prediction atari games arxiv preprint mnih kavukcuoglu silver rusu veness bellemare graves riedmiller fidjeland ostrovski petersen beattie sadik antonoglou king kumaran wierstra legg hassabis control deep reinforcement learning nature mnih badia mirza graves lillicrap harley silver kavukcuoglu asynchronous methods deep reinforcement learning corr url http monfort johnson oliva hofmann asynchronous data aggregation training end end visual control networks 
international conference autonomous agents multiagent systems pages international foundation autonomous agents multiagent systems russell algorithms inverse reinforcement learning international conference machine learning pages schaal learning demonstration neural information processing systems pages schulman levine moritz jordan abbeel trust region policy optimization corr url http sharma lakshminarayanan ravindran learning repeat fine grained action repetition deep reinforcement learning corr url http shiarlis messias whiteson inverse reinforcement learning failure international conference autonomous agents multiagent systems pages international foundation autonomous agents multiagent systems silver huang maddison guez sifre van den driessche schrittwieser antonoglou panneershelvam lanctot dieleman grewe nham kalchbrenner sutskever lillicrap leach kavukcuoglu graepel hassabis mastering game deep neural networks tree search nature subramanian isbell thomaz exploration demonstration interactive reinforcement learning international conference autonomous agents multiagent systems pages international foundation autonomous agents multiagent systems tokui oono hido clayton chainer open source framework deep learning proceedings workshop machine learning systems learningsys annual conference neural information processing systems url http tsividis pouncy tenenbaum gershman human learning atari aaai spring symposium science intelligence computational principles natural artificial intelligence van hasselt guez silver deep reinforcement learning double association advancement artificial intelligence pages watkins dayan machine learning
2
sparse random graphs given degree distribution pim van der gabor dmitri northeastern university department physics northeastern university department mathematics northeastern university departments electrical computer engineering oct october abstract even though degree distributions ubiquitously observed great variety large real networks mathematically satisfactory treatment random graphs satisfying basic statistical requirements realism still lacking requirements sparsity exchangeability projectivity unbiasedness last requirement states entropy graph ensemble must maximized degree distribution constraints prove hypersoft configuration model hscm belonging class random graphs latent hyperparameters also known inhomogeneous random graphs graphs ensemble random graphs sparse unbiased either exchangeable projective proof unbiasedness relies generalized graphons mapping problem maximization normalized gibbs entropy random graph ensemble graphon entropy maximization problem showing two entropies converge limit keywords sparse random graphs degree distributions graphs pacs msc contents introduction hypersoft configuration model hscm properties hscm unbiasedness main results exchangeability projectivity remarks paper organization requirement background information definitions graph ensembles entropy graphs given degree sequence graphs given expected degree sequence scm sparse graphs given degree distribution graphs hypersoft constraints graph ensembles bernoulli graphon entropies dense graphs given degree distribution rescaled graphon entropy sparse graphs sparse graphs given degree distribution sparse hypersoft configuration model sparse hscm results main result limit degree distribution hscm limit expected average degree hscm hscm maximizes graphon entropy graphon entropy scaling convergence gibbs entropy scaling convergence proofs classical limit approximation graphon proofs node degrees hscm technical results poisson couplings concentrations proof theorem proof theorem proofs graphon entropy proof proposition proof theorem proof theorem averaging partition constructing partition introduction random graphs used extensively model variety real networks many networks ranging internet social networks brain universe broad degree distributions often following closely power laws simplest random graph model random graphs poisson degree distributions reproduce resolve disconnect several alternative models proposed studied first one configuration model random graphs given degree sequence model microcanonical ensemble random graphs every graph ensemble fixed degree sequence one observed snapshot real network every graph equiprobable ensemble ensemble thus maximizes gibbs entropy subject constraint degree sequence fixed yet given real network snapshot one usually trust degree sequence ultimate truth variety reasons including measurement imperfections inaccuracies incompleteness noise stochasticity importantly fact real networks dynamic short long time scales growing often orders magnitude years factors partly motivated development soft configuration model scm random graphs given expected degree sequence first considered later corrected shown correction yields canonical ensemble random graphs maximize gibbs entropy constraint expected degree sequence fixed statistics canonical ensembles random graphs known exponential random graphs ergs shown sparse scm equivalent equivalent case dense graphs yet scm still treats given degree sequence fixed constraint albeit sharp soft constraint constraint stark contrast 
reality many growing real networks degree nodes constantly change yet shape degree distribution average degree change staying essentially constant networks grow size even orders magnitude observations motivated development hypersoft configuration model hypersoft configuration model hscm hscm neither degrees even expected values fixed instead fixed properties degree distribution average degree hscm given average degree degree distribution defined exponential measure real line constant graphon measure establishes probability measures intervals log another constant constants two parameters model hscm random graphs size defined via sampling points according measure connecting pairs points sampled locations edge probability alternative equivalent definition obtained mapping definition random variables uniformly distributed unit interval vertices connected probability yet another equivalent definition perhaps familiar frequently used one given interval measure pareto distribution pareto representation expected degree vertex coordinate proportional compared scm edges random variables expected degrees fixed hscm introduces another source hence expected degrees also random variables one obtains particular realization scm hscm sampling fixed distribution freezing therefore hscm probabilistic mixture canonical ensembles scm ergs one may call hscm hypercanonical ensemble given latent variables hscm called hyperparameters statistics properties hscm prove theorem distribution degrees hscm ensemble defined convergence always means limit unless mentioned upper incomplete gamma function regularized lower incomplete gamma function since get pim van der hoorn gabor lippner dmitri krioukov theory hscm hscm hscm hscm hscm internet theory hscm fig degree distribution hscm theory simulations internet theory curve figure panel degree distribution theory simulations left andhscm simulation shown symbols averaged random graphs graph size graphs generated according hscm internet theory curve left panel average degrees averaged random graphs graphs size simulation data shown symbols averaged random graphs graph respectively internet data comes caida archipelago measurements internet autonomous system level thethe number nodes size alltopology graphs generated according hscm withandthe internet graph right panel shows theoretical degree distribution curve average degrees averaged overhscm graphs random graphs graphs size versus simulations random different sizes respectively internet data comes caida average degrees graphs size respectively archipelago measurements internet topology autonomous system level number nodes average degree internet graph right limit constant case hscm equivalently defined panel shows theoretical degree distribution curve growing model labeled graphs converging soft preferential attachment versus simulations random hscm graphs different sizes location node belongs increment average degrees graphs size respectively sampled restricted increment measure node connected existing nodes probability given also theinexact equivalence forthe original equilibrium hscm definitionto prove theorem expected average degree ensemble converges ordered growing definition slightly care ensuring joint distribution exactly sameein cases must taken using basics properties poisson point processes specifically equilibrium formulation given number nodes fixed must behas sampled poisson distribution degree distribution ensemble power tailthe exponent mean alternatively given fixed right boundary interval fixed 
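A minimal sketch of equilibrium HSCM sampling as described above can be given in NumPy. The Fermi-Dirac (logistic) form of the connection probability and the exponential form of the coordinate measure follow the paper's description, but the specific constants (the right boundary R_n, the rate of the measure, and its scaling with n) did not survive extraction and are parameterized here as illustrative assumptions:

```python
# Equilibrium HSCM sampler: n coordinates from an exponential measure on (-inf, R],
# pairs connected independently with a Fermi-Dirac (logistic) graphon.
import numpy as np

def sample_hscm(n, R, rng=np.random.default_rng()):
    # R - Exp(1) places points on (-inf, R] with density increasing toward the
    # boundary; the unit rate is an illustrative choice.
    x = R - rng.exponential(scale=1.0, size=n)
    # Fermi-Dirac connection probability; the exact argument scaling is assumed.
    p = 1.0 / (1.0 + np.exp(x[:, None] + x[None, :]))
    upper = np.triu(rng.random((n, n)) < p, k=1)   # sample each pair once
    return upper | upper.T                          # symmetric adjacency matrix

A = sample_hscm(n=1000, R=0.5 * np.log(1000))      # logarithmic growth of R is illustrative
degrees = A.sum(axis=1)                            # empirical degree sequence
```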
expected average degree fixed constant depend fig must random variable log random variable sampled gamma distribution shape rate node placed requirement unbiasedness coordinates ofand rest nodes sampled defined random labeled increasing order growing graph formulation coordinate thethe average degree hscm converges distribution node determined constant vdegree converges power law hscm certainly one infinite number random variable sampled exponential distribution rate theof possess two properties onepoisson example random graphs three models options equivalent realizations point processhyperbolic measure constanttoaverage degree larger also rate converging binomial sampling bothdegree distribution fixed thehave projective map projectivity definition mapslimit graphsisgthe subgraphs induced numbers triangles clustering unbiased model hscm nodes note even though growing version model exchangeable random graphs constant average degree hscm random since itcharacterized relies labeling nodes theproperties increasingand order coordinates nevertheless graphs two others colloquially hscm equivalent equilibrium hscm ordered labeling joint distribution model maximally random graphs constant average degree node coordinates linking probability function coordinates question formally answered checking whether hscm satisfies maximumthe two formulations observation suggests might exist less trivial projective entropy requirement map model projective exchangeable hscm discrete distribution istosaid satisfy requirement even ordered labeling soft preferential attachment equivalent pisi subject constraints dis appearance fofr sedges realexisting functions states fir frate adjusted version iapcertain vertices note hscm limit random hyperbolic graphs case corresponds uniform density points hyperbolic space spare collection real numbers entropy distribution log maximized subject constraints distribution known always unique belonging exponential family distributions derived basic consistency axioms uniqueness invariance respect change coordinates system independence subset independence since entropy unique measure information satisfying basic requirements continuity monotonicity independence requirement formalizes notion encoding probability distribution describing stochastic system available information system given form constraints encoding information given since distribution unique distribution necessarily possibly implicitly introduces biases encoding additional information constraining system properties concerning given information values clearly uncontrolled information injection model system may affect predictions one may wish make system using model indeed known given available information system predictive power model describes system maximized model perhaps best illustration predictive power predictive power equilibrium statistical mechanics formulated almost fully terms principle illustrate requirement application random graphs suppose define random graph ensemble available information random graphs must nodes edges purely probabilistic perspective random graph ensemble satisfying instance equally good one yet one unique ensemble satisfies constraints also requirement ensemble graph nodes edges equally likely probability distribution set graphs nodes edges uninform without constraints uniform distribution number distribution state space case graphs random satisfying constraints inject model case explicitly additional information graph structure given clearly predictions based random versus may 
different first model trivially predicts occur probability appear nearly zero probability large slightly less trivial example scm case given information expected degrees nodes must state space graphs nodes shown unique ensemble satisfying constraints given random graphs nodes connected probabilities pij unique solution system equations pij popular model different connection probability pcl min thought approximation pij ensemble also satisfies desired constraints pcl albeit sequences satisfy requirement injects case implicitly additional information ensemble constraining undesired properties graphs ensemble values since undesired information injection implicit case may quite difficult detect quantify biases introduced ensemble main results main result paper proof theorem hscm unbiased hscm random graphs maximize gibbs entropy random graphs whose degree distribution average degree converge first difficulty face proving result properly formulate problem constraints indeed show probability distributions hscm defines set graphs maximizes graph entropy log across distributions define random graph ensembles degree distributions average degrees converging constraints quite different scm constraints example fixed fixed set constraints sufficient statistics instead introducing sufficient statistics expected degrees converging desired pareto distribution proceeding show section problem graph entropy maximization constraints equivalent graphon entropy maximization problem problem finding graphon maximizes graphon entropy log log entropy bernoulli random variable success probability across graphons satisfy constraint expected degree node coordinate hscm prove proposition unique solution graphon entropy maximization problem given graphon fact graphon unique solution graphon entropy maximization problem reflection basic fact statistical physics grand canonical ensemble fermi particles edges energy case unique ensemble fixed expected values energy number particles probability find particle state energy given yet solutions graph graphon entropy maximization problems yield equivan lent random graph ensembles rescaled graph entropy converges graphon entropy face another difficulty since ensembles sparse converge zero actually prove two entropies converge faster either converges zero end prove theorems graphon graph entropies converges zero log key result also theorem proof divided scaling factor log difference graphon graph entropies vanishes limit log meaning two entropies indeed converge faster zero combination graphon entropy maximizer convergence rescaled graph entropy entropy graphon implies main result theorem hscm graph entropy maximizer subject degree distribution average degree constraints exchangeability projectivity addition natural dictated network data requirements constant independent graphs size average degree degree distribution well requirement dictated basic statistical considerations reasonable model real networks must also satisfy two requirements exchangeability projectivity exchangeability takes care fact node labels random graph models usually meaningless even though node labels real networks often meaning autonomous system numbers internet node labels random graph models usually random integer indices random graph model exchangeable permutation node indices probabilities two graphs given adjacency matrices random graph model projective exists map graphs size graphs size probability graphs model satisfies condition satisfied easy see model admits dual formulation equilibrium model 
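The entropies referred to above, whose typeset forms were lost, have the following standard forms; this is a reconstruction consistent with the definitions given later in the text, with notation of my choosing:

\[
S[P] = -\sum_{G} P(G)\,\log P(G), \qquad
H(p) = -p\log p - (1-p)\log(1-p),
\]
\[
\sigma[W,\mu] = \iint H\big(W(x,y)\big)\, d\mu(x)\, d\mu(y),
\]

where \(S[P]\) is the Gibbs entropy of the ensemble, \(H(p)\) is the entropy of a Bernoulli random variable with success probability p, and \(\sigma[W,\mu]\) is the graphon entropy. The Fermi-Dirac fact invoked above is that the probability of finding a particle in a state of energy \(\varepsilon\) is \(1/\big(e^{(\varepsilon-\mu)/T}+1\big)\); the corresponding graphon, of the assumed form \(W(x,y) = 1/\big(e^{x+y}+1\big)\), is the analogue of this expression with edges as particles.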
graphs fixed size growing graph model requirement satisfied soon one node added graph due growth real network graph represents resulting bigger graph effectively sampled different distribution corresponding model different parameters necessarily affecting structure existing subgraphs clearly unrealistic scenario simplest examples projective map simply selects subset nodes consisting nodes constant first case one realize growing graphs adding nodes one time connecting new node existing nodes probability second case growth impossible since existing edges growing graphs must removed probability resulting graphs samples hscm random graphs manifestly exchangeable ensemble note fact graphs sparse exchangeable means conflict theorem states limit graphon mapped unit square exchangeable sparse graph family necessarily zero indeed mapped unit square limit hscm graphon zero well also note convergence zero mean ensemble converges infinite empty graphs fact expected degree distribution average degree ensemble converge limit stated hscm ensemble also projective specific labeling nodes breaking exchangeability seen observing density points intervals consequently whole real line limit constant case hscm equivalently defined model growing labeled graphs follows location new node belongs increment sampled restricted increment probability measure location sampled new node connects existing nodes probability given growing model equivalent original equilibrium hscm definition section asymptotically however exact equivalence equilibrium hscm ordered growing counterpart also achieved ensuring joint distribution exactly cases using basic properties poisson point processes specifically equilibrium definition section must adjusted making right boundary interval fixed function random variable log random variable sampled gamma distribution shape rate node placed random coordinate coordinates rest nodes sampled probability measure restricted random interval labeled increasing order coordinates growing model definition must also adjusted coordinate node determined random variable sampled exponential distribution rate one show coordinates finite infinite equilibrium growing hscm models defined way equivalent realizations poisson point process measure rate converging binomial sampling fixed projective map projectivity definition simply maps graphs subgraphs induced nodes note even though growing hscm exchangeable since relies labeling nodes increasing order coordinates nevertheless equivalent equilibrium hscm ordered labeling joint distribution node coordinates linking probability function coordinates equilibrium growing hscm definitions observation suggests might exist less trivial projective map hscm projective exchangeable time remarks note thanks projectiveness hscm shown equivalent soft version preferential attachment model growing graphs new nodes connect existing nodes probabilities proportional expected degrees existing nodes similar hscm degree distribution average degree graphs grown according preferential attachment essentially change either graphs grow equivalence hscm soft preferential attachment exact hscm even ordered labeling equivalent soft preferential attachment equivalent adjusted version certain rate dis appearance edges existing vertices also note hscm limit random hyperbolic graphs case corresponds uniform density points hyperbolic space radial coordinates nodes spherical coordinate system hyperboloid model coordinates certainly negative expected fraction nodes negative coordinates hscm negligible 
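The growing formulation described above admits a direct sketch, building on the conventions of the earlier `sample_hscm` sketch, with the same caveats about the constants: the exponential density, the form of R_t, and the graphon argument are all illustrative assumptions.

```python
# Growing HSCM: node t arrives with a coordinate in the new increment (R_{t-1}, R_t]
# and links to each existing node independently via the same assumed graphon.
import numpy as np

def grow_hscm(n, R, rng=np.random.default_rng()):
    coords, edges = [], []
    for t in range(1, n + 1):
        lo, hi = R(t - 1), R(t)                  # increment covered by node t
        # Sample from the exponential measure restricted to (lo, hi] by inversion;
        # the unit-rate exponential density is an illustrative assumption.
        u = rng.random()
        x_t = np.log(np.exp(lo) + u * (np.exp(hi) - np.exp(lo)))
        for s, x_s in enumerate(coords):
            if rng.random() < 1.0 / (1.0 + np.exp(x_t + x_s)):
                edges.append((s, t - 1))
        coords.append(x_t)
    return coords, edges

coords, edges = grow_hscm(n=1000, R=lambda t: 0.5 * np.log(max(t, 1)))
```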
limit angular coordinates nodes ignored hyperbolic graphon becomes equivalent final introductory remark note among rigorous approaches sparse exchangeable graphs hscm definition perhaps closest graphon processes graphexes particular focus graph convergence limits two ensembles considered one ensemble also appearing defined graphon measure replaced measure space graphs certain expected size growing function time defined sampling points poisson point process intensity connecting pairs points probability given finally removing isolated vertices ensemble even similar hscm still defined location vertex sampled intervals growing whose infinite union covers whole latter ensemble exchangeable ensembles shown converge properly stretched graphons defined yet expected average degree grows infinity limit hscm definition particular vertices graphs sampled exchangeability allowing explicit control degree distribution average degree constant making problem graph convergence difficult discuss graph convergence leaving well generalization results arbitrary degree distributions future publications paper organization next section first review detail necessary background information provide required definitions section formally state results paper section contains proofs results background information definitions graph ensembles entropy graph ensemble set graphs probability measure gibbs entropy ensemble log note entropy random variable respect probability measure graph size sampled according measure write instead given set constraints form graph properties fixed given values ensemble given maximizes across measures satisfy constraints constraints either sharp microcanonical soft canonical satisfied either exactly average respectively simplest example constrained graph property number edges fixed graphs size corresponding microcanonical canonical bles respectively respectively uniform exponential boltzmann distribution hamiltonian number edges graph lagrange multiplier given constraints given degrees nodes instead number edges following characterization microcanonical canonical ensemble graphs given degree sequence given degree sequence microcanonical ensemble graphs degree sequence configuration model uniform set graphs degree sequence graphs given expected degree sequence scm sharp constraints relaxed soft constraints result canonical ensemble soft configuration model scm given expected degree sequence contrast graphical sequence integers sequence real numbers scm defined connecting nodes probabilities pij lagrange multipliers solution pij boltzmann distribution hamiltonian degree node graph sparse graphs given degree distribution let probability density function finite mean denote corresponding random variable consider sequence graph ensembles maximize gibbs entropy constraint lim degree uniformly chosen node ensemble graphs size words ensemble graphs whose degree distribution converges addition degree distribution constraint also want graphs sparse common definition sparseness seems number edges expected average degree unbounded contrast use term sparse mean expected degree converges expected value lim constraint implies number edges note general follow since convergence distribution necessarily imply convergence expectation also note constraints neither sharp soft since deal limits degree distribution expected degree call constraints hypersoft since see random graphs satisfying constraints realized random graphs lagrange multipliers parameters hyperparameters statistics terminology graphs hypersoft 
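The connection probabilities and the Boltzmann form of the SCM do not survive in the passage above; in standard notation, which the surrounding text appears to describe, they read (with given expected degrees k̄_i and realized degrees k_i(G)):

```latex
p_{ij} = \frac{1}{e^{\lambda_i + \lambda_j} + 1}, \qquad
\sum_{j \neq i} p_{ij} = \bar{k}_i \quad (\text{fixing the Lagrange multipliers } \lambda_i),
```

```latex
P(G) = \frac{e^{-H(G)}}{Z}, \qquad H(G) = \sum_i \lambda_i \, k_i(G),
```

that is, the canonical (soft) ensemble is a Boltzmann distribution whose Hamiltonian is a weighted sum of node degrees.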
constraints similar case random graphs given expected degree sequence determine distribution satisfies maximizes gibbs entropy however task poses question means maximize entropy limit constraints particular unlike ensemble graphs given expected degree sequence longer dealing set graphs fixed size sequence graphs varying sizes answer question give proper definition entropy maximization hypersoft constraints consider ensembles graphs graph ensembles simplest case graphon symmetric integrable function graphons precisely graphon equivalence classes consisting functions measure preserving transformations limits dense graph families one think interval continuum limit node indices limit graphs adjacency matrices equivalently probability exists edge nodes graphons application graphs class earlier results exchangeable arrays statistics better known connection probability random graphs latent parameters sociology network science also known graph theory inhomogeneous random graphs use term graphon refer symmetric function let probability measure graphon standard graphonbased ensemble known graphs ensemble random graphs defined first sampling node coordinates according connecting every node pair independently probability able satisfy hypersoft constraints generalize ensemble follows let measure graphon lets infinite sequence growing subsets define graphon ensemble random graphs defined graphon measures probability measures associated sample graph ensemble points first sampled according measure pairs nodes connected edge probability remark uniform measure classical settings graphs case arbitrary measure random graph ensemble similar model considered recently section growing graph construction considered created sampling coordinate node according measure connecting existing nodes independently probability main difference sampling one former case coordinates different nodes sampled different measures thus breaking exchangeability latter case sampled measure bernoulli graphon entropies given coordinates edges random graphs independent bernoulli random variables albeit different success probabilities conditional bernoulli entropy random graphs fixed coordinates thus bernoulli entropy log log graphon entropy respect defined write support confusion arises addition graph graphon ensemble write since two discrete random variables expectation conditional entropy given lower bound entropy graphon entropy definition implies graphon entropy thus lower bound rescaled gibbs entropy defined give definition ensemble sparse graphs instructive consider case dense graphs dense graphs given degree distribution consider sequence degree sequences exist constants let sequence microcanonical ensembles random graphs cms defined exists function lim degree random node random graph equivalently uniformly sampled uniform random variable proven limit sequence given graphon image functions continuous strictly increasing almost everywhere important observations order first note similar exception implies degrees nodes graphs dense particular graphs second consider problem maximizing graphon entropy constraint given show proposition solution problem given defined hence graphon obtained maximizes graphon entropy constraint imposed limit sequence rescaled degree sequences third theorem states dense graph ensembles lim meaning rescaled gibbs entropy graphs converges graphon entropy given result suggests consider family graph ensembles defined graphon given call dense hypersoft configuration model hscm distribution rescaled degrees hscm 
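A minimal sketch of sampling from the generalized graphon ensemble just described: draw the n node coordinates from the measure mu, then connect each pair independently with probability W(x_i, x_j). The function names and the G(n, p) usage example are illustrative stand-ins, not from the paper.

```python
import numpy as np

def sample_graphon_graph(n, W, sample_mu, rng=np.random.default_rng(2)):
    """One draw from the generalized graphon ensemble G(n; W, mu):
    coordinates x_i ~ mu, then an independent Bernoulli coin with success
    probability W(x_i, x_j) for every unordered node pair."""
    x = np.asarray(sample_mu(n, rng))
    p = W(x[:, None], x[None, :])
    upper = np.triu(rng.random((n, n)) < p, k=1)
    return x, upper | upper.T

# Classical G(n, p) is recovered with a constant graphon and any measure:
x, A = sample_graphon_graph(200,
                            W=lambda u, v: 0.05 + 0.0 * (u + v),
                            sample_mu=lambda n, rng: rng.uniform(0.0, 1.0, n))
```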
graphs converges limit graphs also since limit dense graphs limit hscm ensembles limit ensembles converging graphon since two ensembles graphon limit rescaled gibbs entropies converge value equal thanks graphon entropy even though finite two ensembles quite different fourth replace sequence degree sequences sequence expected degree sequences converging upon rescaling replace sequence cms corresponding sequence scms limit scm sequence graphon rescaled scm gibbs entropy also converges graphon entropy squeezed finite entropy hscm entropy words dense scm hscm equivalent limit versus sparse case equivalence broken since graphon zero limit key point however rescaled degree distribution dense hscm converges limit satisfies hypersoft constraints hscm gibbs entropy converges graphon entropy therefore define maximumentropy ensemble given hypersoft constraints ensemble satisfies constraints degree distribution converging limit maximizes graphon entropy constraints given rescaled gibbs entropy converging graphon entropy dense hscm ensemble trivially ensemble addition ensemble unique hypersoft ensemble dense case observations instruct extend definition hypersoft dense graphs sparse graphs naturally replace dense hypersoft constraints sparse hypersoft constraints however things become immediately less trivial case particular face difficulty since limit graphon sparse exchangeable graph ensemble zero according theorem entropy graphon zero well since entropy zero necessarily imply rescaled gibbs entropy converges graphon entropy address difficulty next rescaled graphon entropy sparse graphs consider generalized graphon ensemble random graphs defined section graphon entropy defined everywhere finite positive ensemble sparse address problem rescale graphon entropy upon rescaling converges positive constant let sequence lim rescaling affect graphon entropy maximization problem maximizing every functional given constraint equivalent maximizing constraint upon rescaling see rescaled gibbs entropy converges graphon entropy generalizing lim case converges condition implies rescaled gibbs entropy converges graphon entropy faster either converge zero sparse graphs given degree distribution graphon rescaling previous section define graphon ensemble ensemble sparse hypersoft constraints degree distribution expected degree converge graphon entropy maximized every constraint imposed holds given main result theorem sparse hypersoft configuration model defined next model hypersoft constraints sparse hypersoft configuration model sparse hscm sparse hscm defined graphon ensemble section log dense hscm recovered definition setting constant case log results section formally state results provide brief overviews proofs appearing subsequent sections main result theorem stating hscm defined section model hypersoft degree distribution constraints according definition section result follows theorems proposition theorems establish limits degree distribution expected average degree hscm proposition states hscm graphon uniquely maximizes graphon entropy constraints imposed degree distribution theorem establishes proper graphon rescaling limit rescaled graphon finally critical involved theorem proves rescaled gibbs entropy hscm converges rescaled graphon entropy main result let pareto random variable shape scale probability density function otherwise let discrete random variable probability density function mixed poisson distribution mixing parameter follows since power law exponent tail distribution also power law exponent 
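The density and tail exponent in the main-result setup above lost their formulas; for a Pareto mixing parameter of shape alpha and scale nu, the standard mixed-Poisson computation that the text appears to state is:

```latex
f(\lambda) = \alpha \nu^{\alpha} \lambda^{-\alpha-1} \ (\lambda \ge \nu), \qquad
P(D = k) = \int_{\nu}^{\infty} \frac{\lambda^{k} e^{-\lambda}}{k!}\, f(\lambda)\, d\lambda
\;\propto\; k^{-(\alpha+1)} \quad (k \to \infty),
```

so the degree distribution inherits the power-law tail exponent gamma = alpha + 1 of the mixing distribution.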
particular given therefore degree random node random graph ensemble graphs ensemble sparse degree distribution main result theorem hscm maximum entropy ensemble random graphs hypersoft constraints defined limit degree distribution hscm degree random node random hscm graph size conditioned node coordinates sum independent bernoulli random variables success probabilities distribution ofpthis sum approximated mixed poisson distribution mixing parameter therefore first integrating distribution approximately mixed poisson distribution random variable density mixing parameter expected degree node coordinate given limit distribution expected degree converges pareto distribution shape scale prove use observation mass measure concentrated towards right end interval large therefore contributions coming negative negligible also approximate graphon classical limit approximation addition expected degree function approximated defined otherwise expected degree node coordinate approximated see converges pareto random variable note since density follows log random variable therefore following result full proof found section theorem hscm satisfies let degree uniformly chosen vertex hscm graphs size lim given limit expected average degree hscm expected degree random node hscm graphs fixed independent random variables distribution approximating using contributions shown also vanish limit section following theorem proved theorem hscm satisfies let degree uniformly chosen vertex hscm graphs size lim hscm maximizes graphon entropy let interval measure suppose function given consider graphon entropy maximization problem constraint problem find symmetric function maximizes graphon entropy satisfies constraint fixed note problem continuous version gibbs entropy maximization problem scm ensemble section following proposition prove section states solution problem continuous version scm solution proposition hscm maximizes graphon entropy suppose exists solution graphon entropy maximization problem defined solution following form function uniquely defined proposition proves hscm graphon maximizes graphon entropy constraint hscm chosen always possible soon invertible section interval measure hscm mapped respectively case log leading words node coordinates original hscm definition section coordinates equivalent definition related graphon entropy scaling convergence derive rate convergence hscm graphon entropy zero suffices consider bernoulli entropy classical limit approximation log second since mass concentrated near term negligible integrating first term get log log obtain proper scaling log details behind proof following theorem section theorem graphon entropy convergence let graphon entropy hscm ensemble log log theorem implies goes zero log log goes log gibbs entropy scaling convergence last part theorem prove rescaled gibbs entropy hscm converges graphon entropy faster latter converges zero graphon entropy trivial lower bound rescaled gibbs entropy section problem find appropriate upper bound latter converging graphon entropy identify upper bound rely argument similar specifically first partition intervals induce partition rectangles ist approximate graphon average value rectangle approximation brings error term rectangle show gibbs entropy entropy averaged graphon plus sum entropies indicator random variables take value coordinate node happens fall within interval smaller number intervals smaller total entropy random variables larger sum error terms coming graphon averaging rectangles ist large smaller smaller 
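The equations of the graphon-entropy maximization proposition are missing above; a hedged reconstruction of the standard Lagrange-multiplier argument (the same argument spelled out in the proof section below) is the following. Maximize

```latex
\sigma[W] = \iint_{A \times A} H\big(W(x,y)\big)\, d\mu(x)\, d\mu(y),
\qquad H(p) = -p\log p - (1-p)\log(1-p),
```

subject to the constraint that the expected-degree function is fixed, say \(\int_A W(x,y)\, d\mu(y) = d(x)\) for a given d. Stationarity of the Lagrangian then yields

```latex
\log \frac{1 - W(x,y)}{W(x,y)} = \lambda(x) + \lambda(y)
\quad\Longrightarrow\quad
W(x,y) = \frac{1}{e^{\lambda(x) + \lambda(y)} + 1},
```

with lambda determined uniquely by the constraint; this is the Fermi-Dirac form of the HSCM graphon.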
total error term larger total entropy crux proof find sweet spot right number intervals guaranteeing proper balance two types contributions upper bound want tighter rate convergence graphon entropy zero program executed section prove following theorem theorem gibbs entropy convergence let graphon entropy rescaled gibbs entropy hscm ensemble lim log lim log remark theorem implies log leading term gibbs entropy obtained also instructive compare scaling gibbs entropy scaling dense ensembles finally worth mentioning even though use graphon define graphs convergence results could obtained graphs defined graphon lim density fact establish required limits use classic instead therefore exists vast equivalence cal limit approximation graphon class graphs defined graphons limit degree distribution average degree whose rescaled gibbs entropy converges graphon entropy however follows proposition among ensembles graph ensemble defined graphon uniquely maximizes graphon entropy definition ensembles hypersoft constraints necessary condition graph entropy maximization proofs section provide proofs results stated previous section section begin preliminary results accuracy approximation section graphon classical limit approximation also establish results showing main contribution integration respect positive part interval defined particular show contributions coming negative part interval means results negative part support measure negligible proceed proving theorems section proofs proposition theorem found section finally convergence rescaled gibbs entropy graphon entropy theorem given section classical limit approximation graphon use approximation graphon compute necessary limits precise define min converge zero tends show differences integrals could also worked integral expressions infinity note instead involving might led better bounds however integrals tend evaluate much easier combinations hypergeometric functions integrals evaluate sufficient purposes need consider separately intervals definition since graphons symmetric functions leads following three different cases iii case note obtain following result shows integration case iii matters lemma result holds replace proof first note show together implies first result result follows noting split integral follows first integral compute log finally second integral evaluates result follows since show similar result lemma log moreover result holds replace split interval three parts proof first prove result show integrals ranges bounded term scales since log follows need consider integration hence using symmetry first compute log observe log large enough let holds sufficiently large split integration follows first integral second note hence log second integral obtain log first integral log log therefore conclude log log yields result first compute log log log log noting log large comparing upper bound enough result follows computation done two lemmas establish two important results approximations first shows independent distribution converges expectation faster proposition let independent density proof since follows lemma enough consider integral note hence obtain log since terms result follows converges expectation faster next show also proposition let independent density log log log proof similar previous proof use lemma show enough consider integral define fix note theorem exists next compute log log split integral bound follows log log first integral log second log log log comparing scaling ones lemma see former dominating finishes proof proofs node 
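The "classical limit approximation" defined at the start of the proofs above is, assuming the Fermi-Dirac graphon w(x, y) = (e^{x+y} + 1)^{-1}, presumably:

```latex
\hat{w}(x,y) = \min\big(1,\, e^{-(x+y)}\big), \qquad
0 \;\le\; \hat{w}(x,y) - w(x,y) \;\le\; \hat{w}(x,y)^2 ,
```

so the approximation error is quadratically small wherever ŵ itself is small. This is what lets the integrals in the lemmas above be evaluated for ŵ in closed form and then transferred to w.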
degrees hscm section give proofs theorem theorem denote degree node recall degree node sampled uniformly random since node labels interchangeable without loss generality consider theorem use show independent distribution final result follow proposition pnthe proof theorem involved given coordinates follow strategy theorem construct coupling mixed poisson random variable mixing parameter given distribution general coupling two random variables consists pair new joint probability distribution random variables marginal probabilities respectively advantage coupling tune joint distribution fit needs proof construct coupling pbn pbn lim pbn distribution respectively hence since pbn follows finally show lim pareto random variable shape scale probability density given implies mixed poisson random variable mixing parameter converges mixed poisson mixing parameter proves result give proofs two theorems first establish technical results needed construct coupling required theorem proof given section technical results poisson couplings concentrations first establish general result couplings mixed poisson random variables mixing parameters converge expectation lemma let random variables lim mixed poisson random variables respectively parameters particular lim proof let define event lim since lim enough show lim take mixed poisson parameter addition let mixed poisson parameter min respectively since get using markov inequality since assumption finishes proof converges expectation next show distribution also establish upper bound rate convergence showing converges faster lemma let independent random variables density log proof recall otherwise hence follows first integral lemma deal second integral first compute log therefore log log proceed compute last integral show note since log log log first integral compute log similar calculations yield log hence use last upper bound together obtain result follows proof theorem pbn start constructing coupling pbn lim first let indicator edge present let denote coordinates nodes conditioned pnij independent bernoulli random variables probability let mixed poisson parameter see instance theorem exists coupling therefore get log next since independent use proposition together lemma obtain follows lim let pbn mixed poisson random variable mixing parameter lemma lim pbn follows pbn pbn result pbn lim lim pbn lim pbn lim prove density lim sequence mixed poisson random variables mixing parameters converges mixed poisson random variable mixing parameter converge distribution see instance therefore since mixed poisson mixing parameter implies lim combined yields lim establish first define next hence log moreover else log log fix large enough holds case holds trivially since probabilities hence assume without loss generality large enough follows converges zero hence proves proof theorem first using lemma compute last line follows since next recall therefore using lemma log yields result proofs graphon entropy derive properties graphon entropy graphon given first give proof proposition theorem proof proposition recall given measure interval function consider problem maximizing constraint particular need show solution exists given therefore suppose exists least one graphon satisfies constraint use technique lagrange multipliers variation calculus set framework let denote space symmetric functions satisfy observe convex subset banach space symmetric functions function element respect measure also banach denote latter space slightly abuse notation write functional define 
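The coupling result invoked above ("see, for instance, Theorem ...") for replacing a sum of independent Bernoulli variables by a Poisson variable is, as an inference on our part, Le Cam's theorem:

```latex
S = \sum_{i} I_i, \quad I_i \sim \mathrm{Bern}(p_i) \ \text{independent}, \quad \Lambda = \sum_i p_i
\;\Longrightarrow\;
d_{TV}\big(\mathcal{L}(S),\, \mathrm{Poi}(\Lambda)\big) \;\le\; \sum_i p_i^2 ,
```

and a maximal coupling of the two laws achieves P(S ≠ Y) = d_TV. This is how the mixed-Poisson approximation of the conditional degree distribution is controlled in the argument above.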
functional need solve following equation lagrange multiplier functional respect derivative riesz representation theorem functional exists uniquely defined hence equation becomes addition since symmetric hence absorbing factor rewrite equation two derivatives log log log hence need solve equation log gives however small technicality related computation derivative caused fact defined particular could subset hence well defined compute derivative need well defined small enough end fix define stretched interval similarly define using consider corresponding graphon entropy maximization problem compute taking using chain rule obtain log follows log therefore following equation log leads solution form since follows using elementary algebra conclude moreover converges since obtain graphon maximizes entropy function determined constraint uniqueness suppose exist two solutions graphon entropy maximized let since satisfies follows due linearity derivative since follows almost everywhere hence almost everywhere proof theorem first note proposition implies difference expectation converges zero faster log purpose theorem hence left show rescaled entropy approximate log converges lemma integration regimes except goes zero faster log therefore need consider integration main idea rest proof range let first compute integral right hand side equation log implies lim log next show log log together gives lim log compute log log note log hence follows log theorem log log log conclude log hence using log lim proof theorem section first formalize strategy behind proof theorem briefly discussed section strategy relies partitioning interval subintervals construct specific partition satisfying certain requirements finish proof theorem averaging partition follow strategy first recall graph generated hscm hence denotes normalized gibbs entropy therefore key ingredient find matching upper bound partition range probability measure intervals approximate average box lie precise let increasing sequence positive integers consider partition given define let random variable density function vertex value equal indicates vertex happens lie within interval denoting vector random variable entropy average square ijn ijn ijn ijn measure box belongs approximates specifically first step proof investigate well scales depending want understand difference specific partition note partition intervals log log upper bound achieved partition uniform according measure since log log enough find partition log log lim log proves theorem since lim lim log log log log lim lim log log log constructing partition take partition remaining interval log equal parts end let sufficiently large take dlog define partition note log log log log left prove addition log log hence follows order establish holds replace need consider integral square need show lim log based mean value theorem states compare log log due symmetry get min min note min max addition imn imn log thus exp mean value theorem therefore min min symmetry obtain similar upper bound instead hence conclude min get min min min next partition log log log log log addition implies thus pair therefore using get log finally integrating equation whole square obtain log proves since lim lim log log thanks theorem acknowledgments work supported aro grant nsf grant references boccaletti latora moreno chavez hwanga complex networks structure dynamics phys rep newman networks introduction oxford university press oxford network science cambridge university press cambridge ray solomonoff anatol rapoport 
connectivity random nets math biophys gilbert random graphs ann math stat random graphs publ math edward bender rodney canfield asymptotic number labeled graphs given degree sequences comb theory ser molloy reed critical point random graphs given degree sequence random struct algor amogh dhamdhere constantine dovrolis twelve years evolution internet ecosystem trans netw newman clustering preferential attachment growing networks phys rev fan chung linyuan connected components random graphs given expected degree sequences ann comb fan chung linyuan average distances random graphs given expected degrees proc natl acad sci usa park newman statistical mechanics networks phys rev ginestra bianconi entropy randomized network ensembles epl diego garlaschelli maria loffredo maximum likelihood extracting unbiased information complex networks phys rev tiziano squartini diego garlaschelli analytical method detect patterns real networks new phys paul holland samuel leinhardt exponential family probability distributions directed graphs stat assoc kartik anand ginestra bianconi entropy measures networks toward information theory complex topologies phys rev doi tiziano squartini joey mol frank den hollander diego garlaschelli breaking ensemble equivalence networks phys rev lett sourav chatterjee persi diaconis allan sly random graphs given degree sequence ann appl probab guido caldarelli andrea capocci los rios miguel angel networks varying vertex intrinsic fitness phys rev lett romualdo class correlated random networks hidden variables phys rev kartik anand dmitri krioukov ginestra bianconi entropy distribution condensation random networks given degree distribution phys rev konstantin zuev fragkiskos papadopoulos dmitri krioukov hamiltonian dynamics preferential attachment phys math theor doi michael evans jeffrey rosenthal probability statistics science uncertainty freeman new york kimberly claffy young hyun ken keys marina fomenkov dmitri krioukov internet mapping art science cybersecurity appl technol conf homel secur dmitri krioukov fragkiskos papadopoulos maksim kitsak amin vahdat hyperbolic geometry complex networks phys rev jaynes information theory statistical mechanics phys rev shore johnson axiomatic derivation principle maximum entropy principle minimum ieee trans inf theory tikochinsky tishby levine consistent inference probabilities reproducible experiments phys rev lett john skilling axioms maximum entropy bayesian methods science engineering pages springer netherlands dordrecht claude edmund shannon mathematical theory communication bell syst tech dmitri krioukov clustering implies geometry networks phys rev lett jagat narain kapur models science engineering wiley new delhi david aldous representations partially exchangeable arrays random variables multivar anal persi diaconis svante janson graph limits exhcangeable random graphs rend matemtica olav kallenberg foundations modern probability springer new york cosma rohilla shalizi alessandro rinaldo consistency sampling exponential random graph models ann stat dmitri krioukov massimo ostilli duality equilibrium growing networks phys rev szegedy limits dense graph sequences comb theory ser svante janson graphons cut norm distance couplings rearrangements nyjm monogr douglas hoover relations probability spaces arrays random variables technical report institute adanced study princeton dorogovtsev mendes samukhin structure growing networks preferential linking phys rev lett krapivsky redner leyvraz connectivity growing random networks phys rev 
lett rodrigo aldecoa chiara orsini dmitri krioukov hyperbolic graph generator comput phys commun caron emily fox sparse graphs using exchangeable random measures victor veitch daniel roy class random graphs arising exchangeable random measures christian borgs jennifer chayes henry cohn nina holden sparse exchangeable graphs limits via graphon processes large networks graph limits american mathematical society providence david aldous exchangeability related topics ecole ete probabilites xiii pages springer berlin heidelberg david mcfarland daniel brown social distance metric systematic introduction smallest space analysis bonds pluralism form substance urban social networks pages john wiley new york katherine faust comparison methods positional analysis structural general equivalences soc networks mcpherson evolution dancing landscape organizations networks dynamic blau space soc forces peter hoff adrian raftery mark handcock latent space approaches social network analysis stat assoc svante janson oliver riordan phase transition inhomogeneous random graphs random struct algor hamed hatami svante janson szegedy graph properties graph limits entropy sourav chatterjee varadhan large deviation principle random graph eur comb sourav chatterjee persi diaconis estimating understanding exponential random graph models ann stat charles radin lorenzo sadun singularities entropy asymptotically large simple graphs stat phys christian borgs jennifer chayes vera katalin vesztergombi convergent sequences dense graphs subgraph frequencies metric properties testing adv math ginestra bianconi entropy network ensembles phys rev barvinok hartigan number graphs random graph given degree sequence random struct algor jan grandell mixed poisson processes chapman london remco van der hofstad random graphs complex networks cambridge university press cambridge jean pierre aubin applied functional analysis john wiley sons inc new york gelfand fomin calculus variations dover publications new york
QIRAL: A High Level Language for Lattice QCD Code Generation

Denis Barthou (University of Bordeaux, Bordeaux, France), Gilbert Grosdidier (Laboratoire de l'Accélérateur Linéaire, Orsay, France), Michael Kruse (INRIA Saclay / Alchemy, Orsay, France), Olivier Pène (Laboratoire de Physique Théorique, Orsay, France), Claude Tadonki (MINES ParisTech, Centre de Recherche en Informatique, Fontainebleau, France)

Abstract. Quantum chromodynamics (QCD) is the theory of subnuclear physics aiming at modeling the strong nuclear force, which is responsible for the interactions of nuclear particles. Lattice QCD (LQCD) is the corresponding discrete formulation, widely used in simulations. The computational demand of LQCD is tremendous: it has played a role in the history of supercomputers and is still helping to define their future. Designing efficient LQCD codes that scale well on large, and probably hybrid, supercomputers requires expressing many levels of parallelism and exploring different algorithmic solutions. While such algorithmic exploration is the key to efficient parallel codes, the process is hampered by the necessary coding effort. In this paper we present QIRAL, a language for the high level expression of parallel algorithms in LQCD. Parallelism is expressed through the mathematical structure of the sparse matrices defining the problem. We show that, from such expressions of algorithms and preconditioning formulations, parallel code can be automatically generated. This separates the algorithms and mathematical formulations of LQCD, which belong to the field of physics, from the effective orchestration of parallelism, which is mainly related to compilation and optimization for parallel architectures.

Introduction

Quantum chromodynamics (QCD) is the theory of strong subnuclear interactions, and lattice QCD (LQCD) is the numerical approach used to solve the QCD equations. LQCD simulations are extremely demanding in terms of computing power and require large parallel and distributed machines. At the heart of the simulation lies an inversion problem on a large, sparse, and often only implicitly represented matrix, called the Dirac matrix, with a known right-hand-side vector and an unknown solution vector. To model reality accurately, the matrix sizes required are at the limit of what current supercomputers can handle. High performance LQCD codes for multicore, multinode, and hybrid (GPU-based) architectures are quite complex to design, since performance results from the interplay between the algorithms chosen to solve the inversion and the orchestration of parallelism on the target architecture. Reaching higher levels of performance requires exploring the space of algorithms able to solve the inversion: mixed precision algorithms, aggressive preconditioning, deflation techniques.
achieved enough really sit parameters nature light quarks still described quarks heavier nature heavy quark described lighter implies systematic errors results break limitation several orders magnitudes needed computing power related resources demand new hardware several level parallelism make coding complicated goal provide tools helping face new situation heaviest part lqcd calculation generate large sample large files field configurations according algorithm named hybrid markovian process every step takes several hours powerful computers algorithm complex spends time inverting large linear systems depend field configuration typically one deal matrices billions lines columns second heaviest part calculation compute quark propagators boils solving large linear systems type therefore concentre task matrices involved computation represent stencil computations vertex updated value neighbours structure therefore regular statically known used following section language qiral high level language description algorithms definition matrices vectors equations specific lqcd objective algorithmic part define algorithms independently expression sparse matrices used lqcd qiral language lqcd code generation barthou dirac figure definition dirac matrix lattice qiral objective system definitions equations define properties structure sparse matrices order able find parallelism algorithmic part uses straightforward operational semantics equational part defines rewriting system sake simplicity qiral subset latex meaning qiral input either compiled pdf file rendering purposes compiled executable codes algorithms definitions correspond different predefined latex environments variables types variables constants used qiral vectors denoted type length matrices size complex real numbers besides counted loops indexed variables type index iterating domain denoted indexset index variables potentially multidimensional integers particular case index value functions number argument types also defined size vector defined size index set size index set left undefined vector index set denotes subvector indexed matrices vectors manipulated defined instead matrices vectors built using either constant predefined values identity iis identity index set operators transposition conjugate direct sum tensor product tensor product direct sum defined aij two operators define parallel operations help defining sparse matrices lqcd definition equations lqcd knowledge given system definition matrices vectors system equations values particular structure sparse matrices explicitly defining elements located given definitions structure propagated algorithms figure defines sparse dirac matrix declarations index sets shown well declaration constant matrices used direct sums indexed vertices lattice define matrices diagonals shifted columns permutation matrix jld predefined algorithms preconditioners many algorithms proposed solving equation krylov methods figure presents two algorithms qiral first variant conjugate gradient representative iterative methods second one schur complement method preconditioner computing solution equation achieved computing two solutions smaller problems keyword match helps define algorithm rewriting rule whenever statement match condition found rewritten algorithm schur qiral language lqcd code generation barthou preconditioner rewriting performed condition requirement fulfilled invertible using identities defined equation system rewriting system proves indeed requirement valid matrix dirac matrix projection matrix 
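To make the tensor-product and shift-matrix constructs concrete, here is a toy sketch (in Python rather than QIRAL's LaTeX input) of a Wilson-like hopping operator on a 1-D periodic lattice. Real LQCD uses a 4-D lattice, SU(3) link matrices, and 4x4 gamma matrices; everything below is a deliberately shrunken stand-in, not QIRAL's actual Dirac definition.

```python
import numpy as np

def shift(n, k):
    """n x n permutation matrix S with S[i, (i+k) % n] = 1 (periodic shift)."""
    return np.roll(np.eye(n, dtype=complex), k, axis=1)

def wilson_like_1d(U, kappa, gamma):
    """Toy 1-D Wilson-style operator D = I - kappa * (forward + backward hops).

    U     : length-n array of link variables (complex phases here, standing
            in for SU(3) matrices),
    gamma : a single 2x2 stand-in for a gamma matrix.
    """
    n = len(U)
    U = np.asarray(U, dtype=complex)
    hop_fwd = shift(n, +1) * U[:, None]            # site i pulls from i+1 via U_i
    hop_bwd = shift(n, -1) * np.conj(U)[None, :]   # site i pulls from i-1 via U_{i-1}^dagger
    I2 = np.eye(2, dtype=complex)
    return (np.eye(2 * n, dtype=complex)
            - kappa * (np.kron(hop_fwd, I2 - gamma) + np.kron(hop_bwd, I2 + gamma)))

# Example usage with random U(1) links and a diagonal 'gamma':
rng = np.random.default_rng(0)
U = np.exp(2j * np.pi * rng.random(8))
D = wilson_like_1d(U, kappa=0.12, gamma=np.diag([1.0, -1.0]).astype(complex))
```

The point is structural: the operator is assembled from shifts (permutations), diagonal link factors, direct sums, and Kronecker (tensor) products, which is exactly the algebraic skeleton from which QIRAL derives parallel loops.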
keeping vertices lattice even coordinates preconditioning qiral expressive enough represent different variants conalgo conjugate gradient normal resolution cgnr input output match var algo definition schur complement method schur input output match var require invertible figure two algorithms left variant conjugate gradient method right preconditioner schur complement method jugate gradient bicgstab methods restart gcr methods proposed physicists qiral parallel code qiral compiler takes input file describing algorithms equations generates parallel code phases described presented detail following figure overview qiral compilation chain rewriting system generation current latex implementation qiral input translated set rewriting rules equations using maude framework maude rewriting system handling equational rewriting rules reflection qiral language lqcd code generation barthou precisely definitions translated first phase equations conditional equations algorithms rewriting rules output file contains domain specific definitions dirac operator figure desired program case algorithms preconditioners want apply definitions semantic similar rewriting rule left hand side rewritten right hand side ensure existence normal form system equation convergent confluent far implementation use tool automatic checker algorithms translated conditional rules applied prerequisite checked left hand side rule correspond match clause algorithm right hand side rewriting system merged another one defining general algebraic properties code generation rewriting rules code optimizations main phase qiral compiler therefore described rewriting system following works stratego qiral static strongly type language vectors matrices defined index sets type checking first analysis achieved main phase applying algorithms simplifications user provides list algorithms compose initial program focused equation exploring different algorithms sequences algorithms boils change list given qiral compiler checking algorithm requirements automatically achieved rewriting system using equational theory provided system definitions equational system using equations coming lqcd definitions algebraic properties involving different operators simplifies terms equal zero result program matrix algorithms replaced either dirac matrix matrix obtained transformation preconditioners loop generation parallelization step initial statement replaced algorithm statements directly using dirac matrix preconditioned version assignment statements vector assignments values vertices lattice modified one statement matrices still described using tensor products direct sums loops obtained transforming indexed sums products loops sequences instance direct sum operator indexed lattice definition dirac matrix figure transformed parallel loop elements lattice loop parallel due meaning lattice either nested loops created one single linearized loop created choice depends parameter qiral compiler usual compiler transformations used computing dependences fusioning loops applying scalar promotion optimizations end phase openmp code produced essentially identifying parallel loops set private variables transformations driven rewriting strategies using reflection maude matching library calls resulting code still uses high level operators dense matrices tensor products dense matrices product operators replaced library calls step sufficient define library expression computes rewriting rule defined lion set library functions used validation purposes finally phase mostly syntactic 
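For reference, a plain textbook implementation of the CGNR variant shown in the figure, useful as an executable mental model of what QIRAL's generated loop nest computes. This is a generic sketch, not code emitted by the QIRAL compiler.

```python
import numpy as np

def cgnr(A, b, tol=1e-8, max_iter=1000):
    """Conjugate gradient on the normal residual equations A^H A x = A^H b."""
    x = np.zeros(A.shape[1], dtype=np.result_type(A, b))
    r = b - A @ x
    z = A.conj().T @ r                  # residual of the normal equations
    p = z.copy()
    zz = np.vdot(z, z).real
    for _ in range(max_iter):
        Ap = A @ p
        alpha = zz / np.vdot(Ap, Ap).real
        x = x + alpha * p
        r = r - alpha * Ap
        z = A.conj().T @ r
        zz_new = np.vdot(z, z).real
        if np.sqrt(zz_new) < tol:       # converged on the normal residual
            break
        p = z + (zz_new / zz) * p
        zz = zz_new
    return x
```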
rewriting transforms output function function compiled program skeleton runtime initializes data calls generated code stores result custom functions also defined qiral language lqcd code generation barthou epsilon pragma omp parallel private spnaddspn matmulspn tensor uup gmsubgm gmdiag sup idx spnaddspn matmulspn tensor uup gmsubgm gmdiag sup idx spnaddspn matmulspn tensor uup gmsubgm gmdiag sup idx matmulspn tensor uup gmsubgm gmdiag sup idx cplmulspn kappa spnaddspn matmulspn tensor udn gmaddgm gmdiag sdn idx spnaddspn matmulspn tensor udn gmaddgm gmdiag sdn idx spnaddspn matmulspn tensor udn gmaddgm gmdiag sdn idx matmulspn tensor udn gmaddgm gmdiag sdn idx cplmulspn kappa figure sample output code produced cgnr algorithm since undefined operations qiral input appear functions calls one application call blas routines instead letting qiral implement instance one defines rule says dgemm stage concerned correctness output rather efficiency code purpose steps automatically generated codes different algorithms preconditionings show different convergence speed shown figure figure convergence speed lattice size different methods conjugate gradient normal error cgne normal resolution cgnr biconjugate gradient bicgstab modified conjugate gradient preconditioning conclusion paper presented overview qiral high level language automatic parallel code generation lattice qcd codes language based algorithmic specifications mathematical definitions mathematical objects used computation qiral language lqcd code generation barthou contribution short paper show representation makes possible automatic generation complex lqcd parallel code parallelism directly stems structure sparse regular matrices used lqcd initial matrix represents stencil computation qiral able manipulate complex structures obtained preconditioning instance unlike pochoir approach similar one proposed ashby enables user define new equations definitions qiral compiler able keep information transformations resulting preconditioners algorithms tensor product direct sum operators translated parallel loops lead openmp code generation way algorithmic exploration key higher levels performance freed constraints costs parallel tuning besides generation distributed codes communications within reach heterogeneous architectures cell gpus work automatic optimization required order reach levels performance previous works cell gpu references ashby kennedy boyle cross component optimisation high level categorybased language marco danelutto marco vanneschi domenico laforenza editors europar parallel processing volume lncs pages springer clark babich barros brower rebbi solving lattice qcd systems equations using mixed precision solvers gpus computer physics communications manuel clavel fransisco steven eker patrick lincoln narciso meseguer jose quesada maude system intl conf rewriting techniques applications pages london francisco meseguer checker tool conditional equational maude specifications rewriting logic applications volume lncs pages springer robert edwards balint joo chroma software system lattice qcd khaled ibrahim francois bodin implementing operator cell broadband engine intl conf supercomputing pages new york usa acm martin local coherence deflation low quark modes lattice qcd high energy physics markus franz franchetti yevgen voronenko encyclopedia parallel computing chapter spiral springer claude tadonki gilbert grodidier olivier pene efficient cell library lattice quantum chromodynamics sigarch comput archit news january yuan tang rezaul 
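The even-odd (Schur complement) preconditioner of the earlier figure reduces the original solve to two smaller solves. A dense, illustrative sketch follows; a production code would use a Krylov solver for each inner solve, and the method requires the odd-odd block to be invertible, matching the algorithm's "require" clause.

```python
import numpy as np

def solve_even_odd(D, even_mask, b):
    """Solve D x = b via the even-odd (Schur complement) splitting.

    With sites ordered even/odd, D = [[D_ee, D_eo], [D_oe, D_oo]]; the Schur
    system (D_ee - D_eo D_oo^{-1} D_oe) x_e = b_e - D_eo D_oo^{-1} b_o is
    solved first, then x_o is recovered by back-substitution.
    """
    e = np.flatnonzero(even_mask)
    o = np.flatnonzero(~even_mask)
    Dee, Deo = D[np.ix_(e, e)], D[np.ix_(e, o)]
    Doe, Doo = D[np.ix_(o, e)], D[np.ix_(o, o)]
    S = Dee - Deo @ np.linalg.solve(Doo, Doe)        # Schur complement
    x = np.empty(len(b), dtype=np.result_type(D, b))
    x[e] = np.linalg.solve(S, b[e] - Deo @ np.linalg.solve(Doo, b[o]))
    x[o] = np.linalg.solve(Doo, b[o] - Doe @ x[e])   # back-substitution
    return x
```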
alam chowdhury bradley kuszmaul luk charles leiserson pochoir stencil compiler symp parallelism algorithms architectures page eelco visser zine abidine benaissa core language rewriting electronic notes theoretical computer science rewriting logic applications pavlos vranas matthias blumrich dong chen alan gara mark giampapa philip heidelberger valentina salapura james sexton ron soltz gyan bhanot massively parallel quantum chromodynamics ibm research development pages frank wilczek qcd tells nature listen
Forward Stochastic Reachability Analysis for Uncontrolled Linear Systems using Fourier Transforms

Abraham Vinod (Electrical & Comp. Eng., University of New Mexico, Albuquerque, USA), Baisravan HomChaudhuri (Electrical & Comp. Eng., University of New Mexico, Albuquerque, USA), Meeko Oishi (Electrical & Comp. Eng., University of New Mexico, Albuquerque, USA)

Abstract. We propose a scalable method for forward stochastic reachability analysis of uncontrolled linear systems with affine disturbance. The method uses Fourier transforms to efficiently compute the forward stochastic reach probability measure (density) and the forward stochastic reach set, and it is applicable to systems with both bounded and unbounded disturbance sets. We also examine the convexity properties of the forward stochastic reach set and its probability density. Motivated by the problem of a robot attempting to capture a stochastically moving target, we demonstrate the method on two simple examples; where traditional approaches provide approximations, the method provides exact analytical expressions for the densities and for the probability of capture.

CCS Concepts: Theory of computation: stochastic control and optimization; convex optimization. Computing methodologies: control methods; computational control theory.

Keywords: stochastic reachability; Fourier transform; convex optimization

Introduction

Reachability analysis of dynamical systems with a stochastic disturbance input is an established tool for providing probabilistic assurances of safety or performance. It has been applied in several domains, including motion planning in robotics, spacecraft docking, fishery management, mathematical finance, and autonomous surveillance. The computation of stochastic reachable and viable sets is usually formulated within a dynamic programming framework that generalizes to stochastic hybrid systems but suffers from the curse of dimensionality. Recent work on computing stochastic reachable and viable sets aims to circumvent these computational challenges; approximate dynamic programming, Gaussian mixtures, particle filters, and convex optimization methods have been applied to systems far beyond the scope of what is possible with dynamic programming, and scale to larger, more realistic scenarios.

We focus in particular on the forward stochastic reachable set, defined as the smallest closed set that covers the reachable states. For LTI systems with bounded disturbances, established verification methods can be adapted to overapproximate the forward stochastic reachable set; however, these methods return a trivial result for unbounded disturbances. To address this, the forward stochastic reach probability measure provides the likelihood of reaching a given set of states. We present a scalable method that computes the forward stochastic reachable set as well as its probability measure for LTI systems with stochastic dynamics. We show that Fourier transforms can be used to provide exact reachability analysis for systems with bounded or unbounded disturbances, we provide analytical expressions for the probability density, and we show that explicit expressions can be derived in some cases.

We are motivated by a particular application: pursuit of a dynamic target, a scenario that may arise in the rescue of a lost first responder in a building fire, or in the capture of a UAV in an urban environment. For such situations, solutions that treat the target as adversarial, based on differential games, accommodate bounded disturbances but not unknown stochasticity, and are conservative when the target is not adversarial. We therefore seek scalable solutions that synthesize an optimal controller for the nonadversarial scenario by exploiting the forward reachable set and probability measure of the target.
analyze convexity properties forward stochastic reach probability density sets propose convex optimization problem provide exact probabilistic guarantee success corresponding optimal controller main contributions paper method efficiently compute forward stochastic reach sets corresponding probability measure linear systems uncertainty using fourier transforms convexity erties forward stochastic reach probability measure sets convex formulation maximize probability capture target stochastic dynamics using forward stochastic reachability analysis paper organized follows define forward stochastic reachability problem review properties probability theory fourier analysis section section formulates forward stochastic reachability analysis linear systems using fourier transforms provides convexity results probability measure stochastic reachable set apply proposed method solve controller synthesis problem section provide conclusions directions future work section preliminaries problem formulation section review properties probability theory fourier analysis relevant discussion setup problems detailed discussions probability theory see fourier analysis see denote random vectors bold case vectors overline preliminaries random vector defined probability space given sample space provides collection measurable sets defined sample space either countable discrete random vector uncountable continuous random vector paper focus absolutely continuous random variables absolutely continuous random vector probability measure defines probability density function given borel set short dzp use concept support define forward stochastic reach set support random vector smallest closed set occur almost surely formally support random vector unique minimal closed set supp supp supp section alternatively denoting euclidean ball radius centered ball equivalent via proposition supp ball ball continuous support density section denoting closure set using supp support characteristic function random vector probability density function exp denotes fourier transformation operator given density function computed denotes inverse fourier transformation operator short define spaces measurable functions norm density denotes absolute value space absolutely integrable functions space functions fourier transformation defined functions functions since probability densities definition cfs exist every probability density section let random vectors densities cfs respectively definition let section also supp supp lemma denotes convolution minkowski sum matrices exp section equation independent vectors probability density section marginal probability density group components selected random vector obtained setting remaining fourier variables zero section additional assumption probability density random variable results along properties satisfies following property fourier transform preserves inner product theorem given function fourier transform probability density denotes complex conjugation lemma proof follows property section since probability densities real functions problem formulation consider linear system state disturbance matrices appropriate dimensions let given initial state finite time horizon disturbance set uncountable set either bounded unbounded random vector defined probability space random vector assumed absolutely continuous known density function disturbance process assumed random process independent identical distribution iid dynamics quite general includes affine noise perturbed lti systems known statefeedback based 
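The characteristic-function definitions above lost their formulas; in standard notation (the paper's sign convention for the Fourier operator may differ), they read:

```latex
\Psi_{\mathbf{x}}(\bar{\alpha})
= \mathbb{E}\big[e^{\,j \bar{\alpha}^{\top} \mathbf{x}}\big]
= \int_{\mathbb{R}^n} \psi_{\mathbf{x}}(\bar{z})\, e^{\,j \bar{\alpha}^{\top} \bar{z}}\, d\bar{z},
\qquad
\psi_{\mathbf{x}}(\bar{z})
= \frac{1}{(2\pi)^n} \int_{\mathbb{R}^n} \Psi_{\mathbf{x}}(\bar{\alpha})\, e^{-j \bar{\alpha}^{\top} \bar{z}}\, d\bar{\alpha},
```

that is, the characteristic function is a Fourier transform of the density, and the density is recovered by the inverse transform; this is the pair of operations the method builds on.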
inputs additional affine term include affine noise perturbed lti systems known controllers time random tor defined sequence random vectors given random vector defined product space iid assumption random process state random process random vector instant defined probability space probat bility measure induced denote random process originating let denote realization random vector iterative method forward stochastic reachability analysis fsr analysis given section however systems perturbed continuous random variables numerical implementation iterative approach becomes erroneous larger time instants due iterative numerical evaluation improper integrals motivating need alternative implementable approach problem given dynamics initial state construct analytical expressions time instant smallest closed set covers reachable states forward stochastic reach set probability measure forward stochastic reach set forward stochastic reach probability measure require iterative approach additionally interested applying forward stochastic reachable set fsr set probability measure fsrpm problem capturing target specifically seek convex formulation problem capturing target requires convexity fsr set concavity objective function defined probability successful capture problem finite time horizon find convex formulation maximization probability capture target known stochastic dynamics initial state resulting optimal controller deterministic robot must employ probability capture problem characterize sufficient conditions logconcavity fsrpm convexity fsr set forward stochastic reachability analysis existence forward stochastic reach probability density fsrpd systems form demonstrated section probability state reaching set time starting defined using fsrpm since disturbance set uncountable focus computation fsrpd use link fsrpm discussed countable case define forward stochastic reach set fsr set support random vector initial condition continuous fsrpd fsreach lemma fsreach proof follows note disturbance set unbounded definition fsr set might trivially become also uncountable probability state taking particular value zero therefore superlevel sets fsrpd interpretation countable case however given fsrpd obtain likelihood state reach particular set interest via fsr set via iterative method reachability analysis extend iterative approach fsr analysis proposed nonlinear systems discrete random variables linear system continuous random variables discussion inspired part section helps develop proofs presented later assume system matrix invertible assumption holds systems discretized via euler method property det example function chapter probability density random vector use property obtain corollary equation special case result section extend fsr set computation presented following lemma lemma closed disturbance set system initial condition fsreach fsreach proof follows property lemma allows use existing reachability analysis schemes designed bounded disturbance models overapproximating fsr sets also provide iterative method exact fsr analysis note improper integral must solved iteratively densities whose convolution integrals difficult obtain analytically would need rely numerical integration quadrature techniques numerical evaluation improper integrals computationally expensive section moreover quadratures method become increasingly erroneous larger values due iterative definition disadvantages motivate need solve problem approach provides analytical expressions fsrpd thereby reduce number quadratures required iterative 
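The theorem's expression for the FSRPD is missing above. Assuming the dynamics x_{k+1} = A x_k + F w_k (the state and disturbance matrices are named only implicitly in the text), the reconstruction follows directly from the properties reviewed earlier:

```latex
\mathbf{x}_k = A^{k} \bar{x}_0 + \sum_{i=0}^{k-1} A^{k-1-i} F\, \mathbf{w}_i
\;\Longrightarrow\;
\Psi_{\mathbf{x}_k}(\bar{\alpha})
= e^{\,j \bar{\alpha}^{\top} A^{k} \bar{x}_0}\,
\prod_{i=0}^{k-1} \Psi_{\mathbf{w}}\big( (A^{k-1-i} F)^{\top} \bar{\alpha} \big),
```

after which the FSRPD is obtained as a single n-dimensional inverse Fourier transform of \(\Psi_{\mathbf{x}_k}\) per time instant of interest, rather than through k nested convolutions as in the iterative method.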
method performs well discrete random vectors discretization computation exact however clearly true disturbance set uncountable efficient reachability analysis via characteristic functions employ fourier transformation provide analytical expressions fsrpd instant method involves computing single integral time instant interest opposed iterative approach subsection also show certain disturbance distributions like gaussian distribution explicit expression fsrpd obtained property iid assumption random process random vector seen random vector concatenates disturbance random process theorem time instant initial state fsrpd given exp proof follows property theorem provides analytical expression fsrpd theorem holds even relax identical distribution assumption random process timevarying independent disturbance process provided known using property theorem also easily extended include affine noise perturbed lti systems known controllers note computation fsrpd via theorem require gridding state space hence mitigating curse dimensionality associated traditional approaches structure known fourier transforms theorem used provide explicit expressions fsrpd see proposition systems inverse fourier transform known evaluation done via quadrature techniques handle improper integrals alternatively improper integral approximated quadrature appropriately defined proper integral chapter systems performance affected scalability quadrature schemes dimension however theorem still requires single quadrature time instant interest hand iterative method proposed subsection requires quadratures ndimensional resulting higher computational costs degradation accuracy increases one example known fourier transforms arises gaussian distributions use theorem derive explicit expression fsrpd perturbed gaussian random vector note fsrpd case also computed using properties linear combination gaussian random vectors section theory filter proposition system trajectory initial condition noise process proof multivariate gaussian random vector section exp iid assumption property exp exp matrix entries identity matrix dimension corollary see exp exp exp equation multivariate gaussian random vector corollary obtain using depending system dynamics time instant interest rank cases support random vector restricted sets lower dimension section certain marginal densities functions example see turning effect disturbance setting theorem yields trajectory corresponding deterministic system used relation exp chapters theorem provide analytical expression fsrpd fsr set respectively thereby solve problem density function describing stochastics perturbation convexity results reachability analysis computational tractability useful study convexity properties fsrpd fsr sets define random vector density lemma lemma distribution distribution theorem distribution invertible fsrpd logconcave every proof prove theorem via induction first need show base case true need show lemma density since affine transformations preserve section assume induction since convolution preserves section lemma complete proof corollary distribution fsreach system convex every proof follows theorem theorem corollary solve problem reaching target stochastic dynamics section leverage theory developed paper solve problem efficiently consider problem controlled robot capture stochastically moving target denoted goal robot robot controllable linear dynamics robot uncontrollable linear dynamics perturbed absolutely continuous random vector robot said capture robot robot inside set defined 
around current position robot seek controller independent current state robot robot maximizes probability capturing robot within time horizon information available solve problem position robots deterministic dynamics robot perturbed dynamics robot density perturbation consider environment approach easily extended higher dimensions perform fsr analysis inertial coordinate frame model robot point mass system discretized time state position input input matrix sampling time define control policy depends initial condition sequence control actions given initial condition let denote set feasible control policies input vector consider two cases dynamics robot point mass dynamics double integrator dynamics discretized time perturbed absolutely continuous random vector former case presume velocity drawn bivariate gaussian distribution state position random vector probt ability space pxgg disturbance matrix known initial state robot stochastic velocity mean vector covariance matrix given latter case acceleration direction independent exponential random variable exp exp state position velocity random vector probability space xdi xdi pxgg xdi known initial state robot stochastic acceleration following probability density exp defined using property exponential given section formally robot captures robot time captureset words capture region robot captureset robot optimization problem solve problem proba maximize subject decision variables time capture control policy objective function gives probability robot capturing robot initial state control policy determines unique every using observation define objective function obtain using solution problem theorem captureset captureset problem proba equivalent see section maximize probb subject reachr decision variables time capture position robot time define reach set robot time reachr several deterministic reachability computation tools available computation reachr like mpt formulate problem probb convex optimization problem based results developed subsection lemma input space convex forward reach set reachr convex proposition distributions captureset convex proof theorem know every proof follows since integration function convex set section remark densities since multivariate gaussian density exponential distribution gamma distribution shape parameter logconcave preserved products sections section proposition lemma ensure probc minimize subject log reachr convex decision variable problem probc equivalent convex optimization problem partial maximization respect problem probb since transformed original objective function monotone function yield convex objective constraint sets identical section solve problem probb solving problem probc time instant obtain compute maximum resulting finite set get since problem probb could approach ensures global optimum found note order prevent taking logarithm zero add additional constraint problem probc constraint affect convexity small positive number proposition fact logconcave functions quasiconcave quasiconcave functions convex superlevel sets sections using optimal solution problem probb compute controller drive robot solving problem probd defining minimize probd subject decision variable objective function provides feasible controller provides controller policy minimizes control effort ensuring maximum probability robot capturing robot achieved solving optimization problems probb probd answers problem approach solving problem based solution problem fourier transform based fsr analysis problem convexity results fsrpd fsr 
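Under a Gaussian FSRPD for the target and a box-shaped capture region, the inner objective of problem probC reduces to a Gaussian probability over an axis-aligned box, which is log-concave. The sketch below is a minimal illustration (all parameter values are hypothetical, and the pursuer's reach-set constraint is omitted for brevity); because the maximization is concave, a generic local solver suffices.

```python
import numpy as np
from scipy.stats import multivariate_normal
from scipy.optimize import minimize

# hypothetical target density at the capture time and box half-width
mean_B, cov_B, h = np.array([1.0, 0.5]), 0.2 * np.eye(2), 0.25
rv = multivariate_normal(mean=mean_B, cov=cov_B)

def neg_log_capture_prob(c):
    # P(target in box centred at c) by inclusion-exclusion of CDFs;
    # the small floor plays the role of the extra constraint in probC
    # that prevents taking the logarithm of zero
    lo, hi = c - h, c + h
    p = (rv.cdf(hi) - rv.cdf(np.array([hi[0], lo[1]]))
         - rv.cdf(np.array([lo[0], hi[1]])) + rv.cdf(lo))
    return -np.log(max(p, 1e-12))

res = minimize(neg_log_capture_prob, x0=np.zeros(2))
print(res.x, np.exp(-res.fun))  # capture position, capture probability
```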
sets presented paper contrast iterative approach fsr analysis presented subsection would yield erroneous larger values due heavy reliance quadrature techniques additionally traditional approach dynamic programming based computations would prohibitively costly large fsr sets encountered problem due unbounded disturbances numerical implementation work discussed subsection robot point mass dynamics solve problem probb system given disturbance set lemma system given initial state robot fsreachg every proof follows proposition proposition provides fsrpd lemma provides fsr set system probability successful capture robot computed using since fsrpd available implement problem parameters capture region robot box centered position robot edge length edges parallel axes captureset box convex set use problem probd figure shows evolution mean position robot optimal capture position robot time instants contour plots rotated ellipses since diagonal matrix mean position robot moves straight line trajectory input always optimal time capture optimal capture position time time time figure solution problem probc robot dynamics validation via simulations optimal capture time likelihood capture corresponding probability robot capturing robot note instant reach set robot cover current mean position robot figure reach set covers mean position robot next time instant uncertainty causes probability successful capture reduce figure counterintuitively attempting reach mean always best figure shows optimal capture probabilities obtained solving problem probc dynamics time time robot double integrator dynamics consider complicated capture problem disturbance exponential hence tracking mean little relevance mode global maxima density robot dynamics realistic solve problem probb system given disturbance set based mean stochastic acceleration mean position robot parabolic trajectory due double integrator dynamics opposed linear trajectory seen subsection also case explicit expression fsrpd like proposition using theorem obtain explicit expression fsrpd utilize lemma evaluate capturepr analogous lemma proposition characterize fsr set lemma fsrpd proposition use lemma obtain overapproximation fsr set due unavailability fsrpd use lemma system given initial state robot fsreachg every fsreachg figure snapshots optimal capture positions robots point mass dynamics blue line shows mean position trajectory robot contour plot characterizes blue box shows reach set robot time reachr red box shows capture region centered captureset proof dynamics since rank every elements nonnegative lemma completes proof time infeasible time figure solution problem probc robot dynamics validation via simulations optimal capture time capture probability proposition fsrpd robot dynamics exp time fsrpd robot time proof apply theorem dynamics solve problem probb define since interested position robot require marginal density fsrpd position subspace robot property unlike case gaussian disturbance explicit expressions fsrpd marginal density unavailable since fourier transform standard lemma time figure snapshots optimal capture positions robots double integrator dynamics blue line shows mean position trajectory robot contour plot characterizes via simulation blue box shows reach set robot time reachr red box shows capture region centered captureset proof inequality section also completing proof via induction using similar proof theorem note functions closed convolution theorem lemma proof lemma similar subsection define convex capture region captureset box 
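The capture probabilities reported above can be validated by straightforward Monte Carlo simulation of the target dynamics; a minimal sketch in our notation, assuming the same linear-Gaussian model as before:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_capture_prob(A, x0, mu_w, Sigma_w, k, c, h, n=100_000):
    """Monte Carlo estimate of P(target inside the box of half-width h
    centred at c after k steps), for validating the closed-form FSRPD."""
    x = np.tile(x0, (n, 1)).astype(float)
    for _ in range(k):
        x = x @ A.T + rng.multivariate_normal(mu_w, Sigma_w, size=n)
    return np.mean(np.all(np.abs(x - c) <= h, axis=1))
```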
state robot define indicator function corresponding box centered edge length captureset zero otherwise fourier transform product sinc functions shifted follows property chapter exp sin sin clearly lemmas define equation evaluated using use opposed due unavailability explicit expression numerical evaluation inverse fourier transform compute require two quadratures resulting higher approximation error compared implement problem following parameters use problem probd figure shows evolution mean position robot optimal capture position robot time instants every contour plots estimated via monteg carlo simulation since evaluating via grid computationally expensive note mean position robot coincide contrast problem discussed mode subsection optimal time capture optimal capture position corresponding probability robot capturing robot figure figure shows optimal capture probabilities obtained solving problem probc dynamics validation results numerical implementation analysis computations paper performed using matlab intel core cpu clock rate ram matlab code work available http solved problem probc using matlab functions fmincon optimization mvncdf compute objective case subsection integral compute objective case subsection max compute global optimum problem probb sections used mpt reachable set calculation solved problem probd using cvx using lemma fsr sets restrict search solving problem probc geometric computations done facet representation computed initial guess optimization problem probc performing euclidean projection mean feasible set using cvx section since computing objective costly operation saved significant computational time montecarlo simulation used particles offline computations done either cases overall computation problem probb probd case subsection took seconds since proposition provides explicit expressions fsrpd evaluation fsrpd given point takes millseconds average case subsection overall computation took seconds minutes numerical evaluation improper integral major cause increase runtime evaluation fsrpd given point using takes seconds runtime accuracy depend heavily point well bounds used integral approximation however evaluation using much faster seconds decaying sinc function decaying much faster decaying properties integrand cfs general permits approximating improper integrals proper integral suitably defined finite bounds tradeoff accuracy computational speed common quadrature techniques dictates choice bound detailed analysis various quadrature techniques computational complexity error analysis found chapter conclusions future work paper provides method forward stochastic reachability analysis using fourier transforms method applicable uncontrolled stochastic linear systems fourier transforms simplify computation mitigate curse dimensionality associated gridding state space also analyze several convexity results associated fsrpd fsr sets demonstrate method problem controller synthesis controlled robot pursuing stochastically moving target future work includes exploration various quadrature techniques like particle filters quadratures extension model predictive control framework discrete random vectors countable disturbance sets multiple pursuer applications also investigated acknowledgements authors thank hayat discussions fourier transforms probability theory reviewers insightful comments material based upon work supported national science foundation grant numbers opinions findings conclusions recommendations expressed material authors necessarily reflect views national 
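When the Fourier transform is known but its inverse is not (as in the double-integrator case with exponential disturbances), the density must be recovered by quadrature of a truncated improper integral; the truncation bound controls the accuracy/speed trade-off noted above. A one-dimensional sketch (our notation), sanity-checked against a standard Gaussian whose characteristic function is exp(-t^2/2):

```python
import numpy as np
from scipy.integrate import quad

def pdf_from_cf(cf, x, L=200.0):
    """p(x) = (1/2*pi) * integral of e^{-itx} phi(t) dt, with the
    improper integral truncated to [-L, L]; larger L trades speed
    for accuracy."""
    integrand = lambda t: (np.exp(-1j * t * x) * cf(t)).real
    val, _ = quad(integrand, -L, L, limit=500)
    return val / (2.0 * np.pi)

print(pdf_from_cf(lambda t: np.exp(-t**2 / 2.0), 0.0))  # ~0.39894
```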
science foundation references baisravan homchaudhuri abraham vinod meeko oishi computation forward stochastic reach sets application stochastic dynamic obstacle avoidance proc american control accepted nick malone kendra lesser meeko oishi lydia tapia stochastic reachability based motion planning multiple moving obstacle avoidance proc hybrid syst comput control pages kendra lesser meeko oishi scott erwin stochastic reachability control spacecraft relative motion proc ieee conf decision control pages sean summers john lygeros verification discrete time stochastic hybrid systems stochastic decision problem automatica nikolaos kariotoglou davide raimondo sean summers john lygeros stochastic reachability framework autonomous surveillance cameras european control pages alessandro abate maria prandini john lygeros shankar sastry probabilistic reachability safety controlled discrete time stochastic hybrid systems automatica alessandro abate saurabh amin maria prandini john lygeros shankar sastry computational approaches reachability analysis stochastic hybrid systems proc hybrid syst comput control pages nikolaos kariotoglou sean summers tyler summers maryam kamgarpour john lygeros approximate dynamic programming stochastic reachability european control pages nikolaos kariotoglou kostas margellos john lygeros computational complexity generalization properties coupled scenario programs syst control giorgio manganini matteo pirotta marcello restelli luigi piroddi maria prandini policy search optimal control markov decision processes novel iterative scheme ieee trans pages michal kvasnica juraj holaza deepak ingole reachability analysis control synthesis uncertain linear systems mpt ifac symp robust control alex kurzhanskiy pravin varaiya ellipsoidal toolbox technical report eecs department university california berkeley antoine girard reachability uncertain linear systems using zonotopes proc hybrid syst comput control pages geoffrey hollinger sanjiv singh joseph djugash athanasios kehagias efficient search moving target int robotics research vijay kumar daniela rus sanjiv singh robot sensor networks first responders ieee pervasive computing christopher geyer active target search uavs urban environments proc ieee int conf robotics pages ian mitchell claire tomlin level set methods computation hybrid systems proc hybrid syst comput control pages claire tomlin john lygeros shankar sastry game theoretic approach controller design hybrid systems proc ieee claire tomlin ian mitchell alexandre bayen meeko oishi computational techniques verification hybrid systems proc ieee olivier bokanowski nicolas forcadel hasnaa zidani reachability minimal times state constrained nonlinear problems without controllability assumption siam control optimization haomiao huang jerry ding wei zhang claire tomlin differential game approach ieee trans control syst patrick billingsley probability measure wiley new york edition john gubner probability random processes electrical computer engineers cambridge university press new york cambridge harald mathematical methods statistics princeton university press edition sudhakar dharmadhikari kumar unimodality convexity applications elsevier elias stein guido weiss introduction fourier analysis euclidean spaces volume princeton university press terence tao analysis hindustan book agency edition penot analysis concepts applications springer edition andrzej lasota michael mackey chaos fractals noise stochastic aspects dynamics volume springer science business media ron bracewell 
fourier transform applications william press saul teukolsky william vetterling brian flannery numerical recipes art scientific computing cambridge university press new york usa edition peter dorato vito cerone chaouki abdallah control introduction simon schuster chow teicher probability theory independence interchangeability martingales springer texts statistics springer new york stephen boyd lieven vandenberghe convex optimization cambridge university press cambridge new york martin herceg michal kvasnica colin jones manfred morari toolbox european control pages http michael grant stephen boyd cvx matlab software disciplined convex programming version http
label: 3
stochastic geometry modeling analysis wireless networks seyed mohammad behrooz makki martin haenggi fellow ieee masoumeh senior member ieee dec tommy svensson senior member ieee abstract paper develops stochastic approach modeling analysis singleand wireless networks first define finite homogeneous poisson point processes model number locations transmitters confined region wireless network study coverage probability reference receiver two strategies receiver served closest transmitter among transmitters serving transmitter selected randomly uniform distribution second using matern cluster processes extend model analysis wireless networks receivers modeled two types namely receivers distributed around cluster centers transmitters according symmetric normal distribution served transmitters corresponding clusters receivers hand placed independently transmitters served transmitters cases link distance distribution laplace transform interference derived also derive lower bounds interference wireless networks impact different parameters performance also investigated index terms stochastic geometry coverage probability clustered wireless networks poisson point process matern cluster process dep electrical engineering sharif university technology tehran iran mnasiri makki svensson dep signals systems chalmers university technology gothenburg sweden haenggi dept electrical engineering university notre dame usa mhaenggi work supported part research office sharif university technology grant research link project green communications national science foundation grant ccf ntroduction wireless networks composed number nodes distributed inside finite region spatial setup appropriate model various millimeter wave communications scenarios indoor hoc networks promising candidate technologies next generation wireless networks setup also useful situations range limit backhaul links connecting transmitters core network cloud radio access networks hand randomness irregularity locations nodes wireless network led growing interest use stochastic geometry poisson point processes ppps accurate flexible tractable spatial modeling analysis comparison wireless networks infinite regions mostly modeled infinite homogeneous ppp hppp def modeling performance analysis wireless networks challenging requires different approaches main challenge finite point process statistically similar different locations therefore system performance depends receiver location even averaging point process stochastic modeling analysis wireless networks modeled binomial point process bpp def well studied bpp model fixed finite number nodes distributed independently uniformly inside finite region prior works focus setup reference receiver placed center circular network region also considering circular region recently developed comprehensive framework performance characterizations reference receiver inside region different transmitter selection strategies considering transmitters fixed altitude networks unmanned aerial vehicles analyzed also studies present outage probability characterizations fixed link inside finite region spite usefulness hppp modeling analysis uniform deployments nodes accurately model deployments nodes may deployed places high user density deployments important take account well correlation may exist locations transmitters receivers accordingly third ation partnership project considered clustered models models based poisson cluster processes pcps sec recently studied heterogeneous networks works network follows thomas 
cluster process tcp def pcp model also proposed analyzed heterogeneous networks clustered hoc networks modeled using matern cluster process mcp def tcp performance fixed link analyzed contact distance distributions mcp derived paper develop tractable models wireless networks define finite homogeneous poisson point process fhppp model nodes finite region develop framework analysis wireless networks two different strategies first approach referred reference receiver served closest transmitter network second approach refer uniformly randomly selected transmitter connected receiver strategies cover broad range requirements wireless networks instance approach suitable cellular networks scheme relevant hoc networks model wireless networks consisting different wireless networks consider mcp transmitters receivers consider two types closedaccess receivers located around cluster centers transmitters symmetric normal distribution allowed served transmitters corresponding clusters according strategy receivers served transmitters according strategy derive exact expressions coverage probability reference receiver singleand wireless networks different selection strategies types receivers selection strategy type receiver wireless networks characterize laplace transform interference moreover key step coverage probability analysis distributions distance reference receiver serving transmitter derived also derive tight lower bounds interference case wireless networks convenient coverage probability analysis investigate impact different parameters system models performance terms coverage probability spectral efficiency cases higher path loss exponent improves performance however relatively high distances reference receiver center wireless networks higher path loss degrading effect performance also increase distance reference receiver center network decreases chance coverage analysis reveals exists optimal distance location reference receiver center network maximizes coverage probability optimal distance also observed spectral efficiency evaluation also shows broad range parameter settings proposed lower bounds tightly mimic exact results coverage probability work different literature three perspectives first different bpp models fixed number nodes region consider point process suitable finite regions random number nodes allow arbitrary receiver locations second comprehensively study wireless networks using mcp analysis also derive contact distribution function mcp form significantly simpler one thm third propose receivers different transmitter selection strategies rest paper organized follows section describes system models secion iii proposes transmitter selection strategies presents analytical results coverage probability wireless networks including characterizations serving distance distributions interference corresponding lower bounds section presents analytical results coverage probability wireless networks derives related serving distance distributions interferences section presents numerical results finally section concludes paper ystem odel section provide mathematical model system including spatial distribution nodes wireless networks channel model spatial model wireless networks let define fhppp follows definition define fhppp hppp intensity consider wireless network shown fig locations active transmitters modeled fhppp transmitters assumed transmit fig illustration system model wireless networks power simplicity harmony let represents disk centered radius however theoretical results extended 
arbitrary regions receivers located everywhere loss generality conduct analysis reference receiver located origin define kxo proposed setup well models wireless network confined finite region indoor hoc networks spatial model wireless networks mcp defined follows def definition mcp union offspring points located around parent points parent point process hppp intensity offspring point processes one per parent conditionally independent conditioned offsprings form fhppp intensity disk consider wireless network shown fig locations active transmitters modeled mcp consider two types receivers first type referred receivers served single cluster transmitters receiver distributed according symmetric normal distribution variance around parent point corresponding cluster therefore assuming receiver parent point rayleigh distributed probability density function pdf exp second type referred receivers considers receivers placed independently transmitters served transmitters fig illustration system model finite piece wireless networks proposed setup well models various scenarios follows clustered base stations bss trend cellular networks deploy bss places high user density referred networks also proposed way according setup users likely close cluster bss modeled receivers users stadium mall receivers model users distributed homogeneously independently locations pedestrians cars cloud bss cloud distributed system formed number simple antenna terminals application receivers modeled users license use certain receivers model users flexibility access bss clustered access networks large building may number wifi access points access network meet users demands receivers model users building use access network hand receivers users handoff access networks different buildings clustered networks device typically nearby devices finite region cluster network direct communications strategy considered cellular hoc access contents distributed devices respectively channel model assume path loss rayleigh fading thus received power reference receiver transmitter located common transmit power set loss generality path loss exponent sequence consists exponential random variables mean iii ingle luster ireless etworks section concentrate wireless networks allocate transmitter reference receiver propose selection strategies subsection distance distributions coverage probabilities selection strategies derived subsections respectively however resulting expressions coverage probabilities easy use hence derive lower bound coverage probability strategy subsection selection strategies reference receiver served transmitter provides maximum received power averaged fading model leads closestselection strategy arg min kyk denotes number elements set suitable networks infrastructure downlink cellular networks strategy implies receiver served transmitter whose voronoi cell resides serving transmitter selected randomly uniform distribution among transmitters leads unif unif denotes operation models random allocation receivers transmitters may case networks without infrastructure hoc networks networks also suitable applications content interest receiver available transmitter among transmitters equal probability caching networks ratio sinr reference receiver origin expressed hxq kxq sinrq location serving transmitter denotes interference noise power distinguish strategies represent case strategies respectively notational simplicity let also define kxq serving distance distribution considering strategies subsection derive distributions distance 
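The model of this section is easy to simulate, which is useful both for intuition and for checking the analytical coverage expressions derived next. The sketch below (a minimal illustration with parameters of our choosing) draws an FHPPP on the disc, applies the path-loss/Rayleigh-fading channel, and estimates the coverage probability of the closest-selection strategy by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(1)

def uniform_disc(n, rho):
    r = rho * np.sqrt(rng.uniform(size=n))
    t = rng.uniform(0.0, 2.0 * np.pi, size=n)
    return np.column_stack((r * np.cos(t), r * np.sin(t)))

def coverage_mc(lam, rho, v, alpha, theta, sigma2, trials=20_000):
    """P(SINR > theta) for a receiver at distance v from the centre of
    a disc of radius rho holding an FHPPP of transmitters of intensity
    lam; path-loss exponent alpha, Rayleigh fading, noise power sigma2;
    serving transmitter chosen by the closest-selection strategy."""
    hits = 0
    for _ in range(trials):
        n = rng.poisson(lam * np.pi * rho**2)
        if n == 0:
            continue  # no transmitter in the disc: no coverage
        pts = uniform_disc(n, rho)
        d = np.hypot(pts[:, 0] - v, pts[:, 1])
        p = rng.exponential(1.0, size=n) * d**(-alpha)
        k = np.argmin(d)
        sinr = p[k] / (p.sum() - p[k] + sigma2)
        hits += sinr > theta
    return hits / trials

print(coverage_mc(lam=1.0, rho=5.0, v=2.0, alpha=4.0,
                  theta=1.0, sigma2=0.01))
```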
reference receiver serving transmitter distance distributions used later coverage probability analysis let first define cos cos present lemma intersection area two circles lemma consider two circles radii centers separated distance area intersection given cos considering strategy distance reference receiver nearest transmitter larger least one transmitter exists inside transmitter located within letting denote intersection exp exp exp exp exp exp denotes area region also follows fact numbers points ppp disjoint regions independent note intersection whole zero according fig shows illustrations intersection two different cases follows fig illustration intersection case subplot subplot case subplot case given case considering strategy hand distance reference receiver randomly chosen transmitter less transmitter transmitter located within thus transmitter distributed independently uniformly within following cases case case coverage probability subsection distance distribution results obtained used derive coverage probability reference receiver two transmitter selection strategies key step coverage probability derivation strategy obtain interference theorems notational simplicity define denotes gauss hypergeometric function strategy theorem conditioned interference strategy ldic defined exp exp exp defined subsection proof see appendix using conditional interference derived theorem express coverage probability reference receiver strategy pcc sinrc minimum required sinr coverage note coverage probability zero transmitter sinrc meaningfully defined transmitter averaging serving distance hxc exp frd frd pdf obtained conditional coverage probability given link distance expressed hxc exp exp ldic follows hxc exp finally according cases considered subsection ldic given coverage probability obtained exp exp lic pcc exp exp ldic exp exp special case infinite wireless networks coverage probability simplifies result thm strategy theorem interference strategy liu defined exp exp exp exp exp exp proof see appendix interference independent serving distance using theorem following approach coverage probability strategy expressed exp ldiu pcu exp liu exp fig outer bounds two cases lower bounds coverage probability since results derived interference theorems require intensive numerical computations derive tight lower bounds interference lead tractable useful analytical results using bounds lower bounds coverage probability also provided tightness bounds verified numerical results fig obtain lower bounds outer bound region region permits bounds interference note using larger region leads upper bound interference turn lower bound outer region cases shown fig placing center sectors reference receiver case two covering radii considered also case consider sector radii front angle entangled two tangent lines however strategy case achieve tighter bound following regions region including interfering transmitters case sector front angle equal twice intersection angle radii considered case consider two sectors front angle radii front angle radii following corollaries present lower bounds interference strategies corollary interference given serving distance lower bounded licb defined exp exp exp exp proof proof follows approach appendix except disk replaced regions given fig corollary interference lower bounded liub defined exp exp exp exp exp exp proof proof follows approach appendix except disk replaced regions given fig ulti luster ireless etworks section extend analysis wireless networks receivers investigated 
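The serving-distance distribution for the closest-selection strategy follows from the void probability of the FHPPP together with the circle-intersection area of the lemma; both are a few lines of code (a sketch in our notation):

```python
import numpy as np

def lens_area(r1, r2, v):
    """Area of the intersection of two discs of radii r1, r2 whose
    centres are a distance v apart (the lemma's lens formula)."""
    if v >= r1 + r2:
        return 0.0
    if v <= abs(r1 - r2):
        return np.pi * min(r1, r2) ** 2
    a1 = r1**2 * np.arccos((v**2 + r1**2 - r2**2) / (2.0 * v * r1))
    a2 = r2**2 * np.arccos((v**2 + r2**2 - r1**2) / (2.0 * v * r2))
    tri = 0.5 * np.sqrt((-v + r1 + r2) * (v + r1 - r2)
                        * (v - r1 + r2) * (v + r1 + r2))
    return a1 + a2 - tri

def serving_distance_cdf(d, lam, rho, v):
    """P(closest transmitter within d): one minus the void probability
    of the FHPPP on b(x0, d) intersected with b(0, rho)."""
    return 1.0 - np.exp(-lam * lens_area(d, rho, v))
```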
subsections respectively receivers analysis consider reference receiver origin add cluster intensity disk kxo pdf given network thanks slivnyak theorem thm additional cluster receiver become representative cluster receiver expectation means link performance corresponds average performance links realization network therefore serving transmitter strategy arg min kyk strategy serving transmitter found unif sinr origin distance kxk relative serving transmitter sinr iintra iinter denotes interference caused transmitters inside representative cluster iinter represents interference iintra caused transmitters outside representative cluster given distance receiver center representative cluster kxo distributions distances kxc kxu derived subsection strategies respectively also interference conditioned kxo defined function liintra given theorems strategies relatively interference also characterized following theorem theorem interference iinter zdn liinter exp exp udu exp udu defined cos cos proof see appendix using expressions derived interferences coverage probability strategy calculated deconditioning sinrc exp frv fkxo drdv iintra iinter conditional coverage probability expressed exp lic liinter iintra iinter according cases subsections given exp lvic liinter fkxo drdv exp lvic liinter fkxo drdv exp lvic liinter fkxo drdv finally coverage probability strategy derived procedure given exp lviu liinter fkxo drdv exp lviu liinter fkxo drdv exp liu liinter fkxo drdv receivers consider strategy note strategy case since number transmitters infinite therefore serving transmitter arg min kyk distribution distance reference receiver origin kxt given following theorem theorem cumulative distribution function cdf exp exp exp udu frt exp exp exp udu proof see appendix note contact distribution function mcp also derived using probability generating functional pgfl pcps cor however approach tractable leads result much easier numerically evaluate thm taking derivative frt using leibniz integral rule simplifications pdf obtained udu exp exp exp udu frt udu exp exp exp udu sinr reference receiver located origin hxt sinrt denotes total interference caused transmitters except serving transmitter total interference conditioned serving distance characterized following theorem theorem conditioned total interference udu udu udu exp udu lit exp udu udu exp udu defined respectively also given proof see appendix using theorem procedure subsection coverage probability reference receiver wireless networks found exp lit frt umerical esults iscussion section provide analytical results specific scenarios wireless networks addition discuss results provide key design insights wireless networks consider scenario wireless networks transmitters distributed according fhppp intensity disk radius evaluate coverage probability results derived subsection also evaluate spectral efficiency defined log sinr variance additive white gaussian noise set define normalized relative distance following study impact path loss exponent distance receiver center disk coverage probability spectral efficiency also investigate tightness bounds derived subsection effect path loss exponent coverage probability function minimum required sinr plotted fig strategies considering observed coverage probability improved increasing path loss exponent however higher path loss exponent degrading effect lower sinr exhibits tradeoff increases power desired interfering signals decrease lead increase decrease sinr depending parameters target coverage probability 
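A realisation of the Matern cluster process used in this section can be generated directly from its definition: parents form an HPPP, and each parent receives a Poisson number of offspring placed uniformly in a disc around it. A minimal sketch follows (window size and intensities are illustrative); a closed-access receiver would additionally be drawn from a symmetric normal distribution around one of the parents:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_mcp(lam_p, lam_d, r, window):
    """One MCP realisation: parent HPPP of intensity lam_p on the
    square [-window, window]^2; each cluster is an FHPPP of intensity
    lam_d on the disc of radius r around its parent."""
    n_par = rng.poisson(lam_p * (2.0 * window) ** 2)
    parents = rng.uniform(-window, window, size=(n_par, 2))
    pts = []
    for par in parents:
        m = rng.poisson(lam_d * np.pi * r**2)
        rad = r * np.sqrt(rng.uniform(size=m))
        ang = rng.uniform(0.0, 2.0 * np.pi, size=m)
        pts.append(par + np.column_stack((rad * np.cos(ang),
                                          rad * np.sin(ang))))
    return parents, (np.vstack(pts) if pts else np.empty((0, 2)))
```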
horizontal gap case case effect receiver distance center coverage probability function normalized distance studied fig selection strategies observed depending optimal value distance receiver cases terms coverage probability due fact sinr tradeoff since power desired interfering signals decrease distance receiver center disk increases tightness bounds tightness bounds coverage probability derived subsection evaluated fig cases different selection strategies observed bounds tightly approximate performance broad range sinr thresholds different positions receiver inside outside disk spectral efficiency spectral efficiency function shown fig selection strategies analytically obtain spectral efficiency coverage probability thm log sinrq pcq use observed achieves much higher spectral efficiency different receiver distances also crossing point whereby spectral efficiency improves increases reaching distance receiver location outside disk higher distances reverse happens intuitive lower distances power interfering signals decreases higher path loss exponent note upper bound actual spectral efficiency since formulation assumes transmitter knows fading coefficients transmitters receiver would generous assumption tighter lower bound could found using approach described coverage probability fig coverage probability function sinr threshold denote respectively coverage probability fig coverage probability function normalized distance dominant factor sinr hand higher distances power desired signal increases smaller path loss exponent dominates also may optimal value distance receiver terms spectral efficiency result coverage probability behavior distance wireless networks consider wireless network transmitters form mcp intensity inside disks equal radius whose centers follow hppp intensity receivers distributed normal distribution variance around cluster centers transmitters define normalized standard deviation following effects path loss exponent variance normal distribution receivers assessed theoretical results derived subsections coverage probability fig tightness coverage probability lower bounds denote exact result spectral efficiency use lower bound respectively fig spectral efficiency function normalized distance coverage probability fig coverage probability receiver function threshold effect path loss exponent fig coverage probability function plotted strategies case receivers figure case receivers strategy also considered results presented observed higher path loss exponent improves coverage probability fig coverage probability receiver function normalized standard deviation coverage probability practical minimum required sinrs effect variance normal distribution coverage probability function normalized standard deviation shown fig results presented receivers considering observed coverage probability decreases standard deviation variance normal distribution increases intuitive probability event receiver located farther representative transmitters increases onclusion paper developed comprehensive tractable framework modeling analysis wireless networks suitable different wireless applications considered two strategies reference receiver select serving transmitter wireless network considering two types receivers extended modeling wireless networks composed distributed wireless networks using tools stochastic geometry derived exact expressions coverage probability cases different transmitter selection strategies types receivers wireless networks also proposed tight expressions bounding coverage 
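The spectral-efficiency curves are obtained from the coverage probability through the standard identity E[log2(1+SINR)] = (1/ln 2) * integral over t >= 0 of P(SINR > t)/(1+t) dt; numerically this is a single one-dimensional integral over the coverage curve. A sketch (the truncation bound is our choice):

```python
import numpy as np
from scipy.integrate import quad

def spectral_efficiency(pc, upper=1e4):
    """E[log2(1+SINR)] from the coverage probability pc(t) = P(SINR > t),
    truncating the improper integral at `upper`."""
    val, _ = quad(lambda t: pc(t) / (1.0 + t), 0.0, upper, limit=200)
    return val / np.log(2.0)

# usage with a toy exponential coverage curve
print(spectral_efficiency(lambda t: np.exp(-t / 5.0)))
```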
probability case wireless networks analysis revealed higher path loss exponent improves performance except receiver outside relatively far cluster transmitters addition although increase distance receiver center cluster typically degrades performance exists location optimal performance receiver also strategy significantly outperforms strategy ppendix roof heorem interference given serving distance calculated lic exp kyk exp skyk exp obtained exp follows pgfl ppp thm fact interfering nodes farther away intersection compute two types case two types case convert cartesian polar coordinates fact type denotes special form represented polar coordinates uniquely case two different types case follows type given fig ldic exp zrc simplified ldic exp exp follows replacing calculating corresponding integral based formula uses gauss hypergeometric function type given fig ldic exp exp case observed fig case two different types interference depending whether angle lower boundary within two tangent lines crossing origin included boundary type since intersection angle equal angle tangent lines lic exp exp type lic exp exp ppendix roof heorem interference liu exp kyk exp kyk exp exp exp exp exp exp exp exp fru frd pdf obtained distributions found exp also follows fact conditioned bpp distance kyk point distribution frd follows exp found function mgf poisson random variable mean simplifications according two cases compute integral case fru case fru ppendix roof heorem interference exp liinter exp skz exp skz exp exp skz follows exp comes fact written taken fhppp intensity also follow pgfl ppp order convert inner integral cartesian polar coordinates two cases case kxk kxk kxk skz kxk kxk kxk cos case kxk kxk skz kxk kxk kxk kxk kxk kxk kxk kxk kxk kxk kxk cos therefore converting outer integral cartesian polar coordinates final result written liinter exp exp udu exp udu ppendix roof heorem let define min kyk distance reference receiver closest transmitter cluster parent point cdf min lim min frt min lim lim lim exp exp lim exp exp lim exp exp exp exp exp follows lim due facts function conditioned bpp also obtained conditioning existence transmitter inside cluster comes subsection whereby defined exp exp exp area intersection also note addition follows fact points uniformly distributed found mgf number points poisson mean according fig two cases case kxk bkxk case kxk bkxk kxk kxk kxk kxk kxk kxk kxk kxk bkxk given therefore converting cartesian polar coordinates according cdf found frt exp exp udu exp udu exp udu exp udu frt exp exp udu exp udu exp udu exp udu frt exp exp udu exp udu exp udu exp udu though derived differently leads expression final result obtained simplifications ppendix roof heorem total interference conditioned serving distance lit exp exp exp exp exp follows fact transmitters distance interferers comes exp follow pgfl according theorems following cases inner integral case kxk kxk exp akxk skyk case kxk kxk kxk kxk exp skyk case kxk kxk kxk kxk exp case kxk kxk kxk exp kxk skyk case kxk exp exp kxk case kxk exp therefore converting outer integral cartesian polar coordinates according given lit exp udu udu udu exp udu udu lit exp exp udu udu udu udu exp udu lit exp exp udu udu udu exp udu final result obtained simplifications eferences andrews buzzi choi hanly lozano soong zhang ieee sel areas vol jun boccardi heath lozano marzetta popovski five disruptive technology directions ieee commun vol peng wang lau poor cloud radio access networks insights challenges ieee wireless vol april haenggi 
andrews baccelli dousse franceschetti stochastic geometry random graphs analysis design wireless networks ieee sel areas vol elsawy hossain haenggi stochastic geometry modeling analysis design cognitive cellular wireless networks survey ieee commun surveys vol quarter elsawy alouini win modeling analysis cellular networks using stochastic geometry tutorial ieee commun surveys vol quarter andrews ganti haenggi jindal weber primer spatial modeling analysis wireless networks ieee commun vol andrews baccelli ganti tractable approach coverage rate cellular networks ieee trans vol andrews gupta dhillon primer cellular network analysis using stochastic geometry arxiv preprint apr available online haenggi stochastic geometry wireless networks cambridge university press torrieri valenti outage probability finite hoc network nakagami fading ieee trans vol venugopal valenti heath interference highly dense millimeter wave networks ieee ita san diego usa afshang dhillon optimal geographic caching finite wireless networks proc ieee spawc edinburgh scotland july chetlur dhillon downlink coverage probability finite network unmanned aerial vehicle uav base stations proc ieee spawc edinburgh scotland july srinivasa haenggi distance distributions finite uniformly random networks theory applications ieee trans veh vol afshang dhillon fundamentals modeling finite wireless networks using binomial point process ieee trans wireless vol may chetlur dhillon downlink coverage analysis finite wireless network unmanned aerial vehicles ieee trans vol july guo durrani zhou outage probability finite wireless networks ieee trans vol guo durrani zhou performance analysis underlay cognitive networks effects secondary user activity protocols ieee trans vol khalid durrani distance distributions regular polygons ieee trans veh vol june valenti torrieri talarico direct approach computing spatially averaged outage probability ieee commun vol july afshang dhillon poisson cluster process based analysis hetnets correlated user base station locations arxiv preprint available online https afshang dhillon chong modeling performance analysis clustered networks ieee trans wireless vol july generation partnership project technical specification group radio access network scenarios requirements small cell enhancements release advancements physical layer aspects suryaprakash fettweis modeling analysis heterogeneous radio access networks using poisson cluster process ieee trans wireless vol ganti haenggi interference outage clustered wireless hoc networks ieee trans wireless vol afshang saha dhillon contact distance distributions matern cluster process ieee commun vol deng zhou haenggi heterogeneous cellular network models dependence ieee sel areas vol flanagan creating cloud base stations keystone multicore architecture texas instruments white paper usa available online http han liu resource allocation wireless networks basics techniques applications cambridge university press gradshteyn ryzhik table integrals series products academic press george mungara lozano haenggi ergodic spectral efficiency mimo cellular networks ieee trans wireless vol may
label: 7
jun polynomials given hilbert function applications alessandra bernardi joachim jelisiejew pedro macias marques kristian ranestad abstract using macaulay correspondence study family artinian gorenstein local algebras fixed symmetric hilbert function decomposition application give new lower bound cactus varieties third veronese embedding discuss case cubic surfaces interesting phenomena occur introduction macaulay established correspondence polynomials artinian local gorenstein algebras particular polynomial dual socle generator artinian local gorenstein algebra paper interpret hilbert function algebra hilbert function corresponding polynomial give description set polynomials given symmetric hilbert function decomposition polynomial ring consider polynomials divided power ring kdp polynomial ring acting contraction see section artinian local gorenstein algebra associated quotient annihilator ideal thus spec spec local gorenstein scheme supported origin space spec application mind apolarity dimension cactus varieties cubic forms cactus varieties generalizations secant varieties definition let projective variety cactus variety cactusr closure union linear spaces spanned length subschemes abuse slightly notation variety since cactus variety often reducible algebraic set interested case embedded third veronese embedding consider like divided power ring kdp polynomial ring acting contraction cubic form multiplication scalars point pure cubes form subvariety least length subscheme whose linear span contains called cactus rank closure set cubic forms cactus rank cactus variety denoted cactusr via contraction action natural homogeneous coordinate ring contains span homogeneous ideal contained classical fact called apolarity lemma motivation subscheme apolar mathematics subject classification primary secondary key words phrases cactus rank artinian gorenstein local algebra bernardi jelisiejew macias marques ranestad apply macaulay correspondence investigate local gorenstein schemes apolar main result following lower bound dimension cactus varieties cubic forms theorem corollary let let third veronese embedding even dim cactusr odd hence assumptions secant variety strictly contained cactus variety secant variety fact inclusion cactusr strict consequence inequality dim side expected dimension secant variety easy parameter count gives upper bound actual dimension secant variety known thanks alexander hirschowitz theorem alexander hirschowitz variety cactusr ambient space see bernardi ranestad observe cactusr see casnati notari cases casnati jelisiejew notari remaining cases link artinian local gorenstein algebras apolar schemes homogeneous form provided fact spec local scheme supported apolar definition minimal length local apolar scheme called local cactus rank link strengthened following result call sum homogeneous terms polynomial degree tail proposition let homogeneous polynomial degree let let scheme minimal length among local schemes supported apolar affine apolar scheme polynomial whose tail equals particularly important problem cactus rank general form minimal cactusr results improve previous known bounds remains major open problem theory refer interested reader iarrobino kanev bernardi ranestad bernardi brachat mourrain step order able compute cactus rank describe structure minimal apolar schemes start considering minimal scheme apolar form decompose scheme supported one point take corresponding decomposition minimal local scheme apolar according proposition one would like invariants local 
apolar gorenstein schemes parameterizing degreed tails polynomials scheme given invariant iarrobino analysis iarrobino recall section provides one discrete invariant symmetric hilbert function decomposition one wants estimate dimension sets polynomials local cactus rank one needs understand structure polynomials symmetric hilbert function decomposition section use standard exotic forms explain unlucky behavior number variables involved homogeneous summand given polynomial may larger expected hilbert function symmetric decomposition motivated describe family ffm polynomials whose linear partials hilbert function coincide part exotic summand proposition show polynomials ffm isomorphic apolar algebras compute corollary dimension proposition give decomposition polynomial sum polynomial standard form exotic summand section focus local cactus rank proving proposition computing local cactus rank general cubic surface finally section use description order estimate dimension set polynomials symmetric hilbert function decomposition allows estimate dimension cactus variety particular prove lower bound dimension corollary notations main applications consider homogeneous forms kdp dehomogenization kdp consider action polynomial ring contraction otherwise similarly consider action polynomial ring contraction restricted note using notation ordinary powers divided powers unlike usually done literature properties divided power rings see instance iarrobino kanev appendix divided power would written characteristic could used ordinary therefore abuse language call partial polynomial preliminaries begin section presenting macaulay correspondence polynomials artinian gorenstein local rings starting point theory macaulay correspondence let algebraically closed characteristic consider divided power ring kdp consider action polynomial ring subsection let parts respectively respect action classically known apolarity natural dual spaces dual bases annihilator polynomial degree ideal denote quotient local artinian gorenstein ring see bernardi jelisiejew macias marques ranestad iarrobino kanev lemma fact generated artinian image generates unique maximal ideal local furthermore socle annihilator maximal ideal namely gorenstein addition form homogeneous ideal therefore artinian gorenstein graded local ring symmetric decomposition hilbert function polynomial consider polynomial kdp let annihilator respect contraction shall interpret hilbert function local artinian gorenstein quotient terms space partials polynomial particular recall interpret iarrobino analysis hilbert functions associated graded algebras symmetric decomposition apply analysis next section characterize polynomials given hilbert function local artinian gorenstein quotient ring naturally isomorphic space partials following iarrobino consider hilbert functions graded rings associated two let maximal ideal deg associated graded ring whose hilbert function denote induces following sequence ideals let consider respective hilbert functions decomposes sum check following important result structure modules proposition iarrobino theorem satisfy following reflexivity condition particular hilbert function symmetric thus hilbert function symmetric decomposition possible symmetric decompositions hilbert function restricted fact partial sums symmetric decomposition hilbert functions suitable quotients corollary iarrobino section hilbert function satisfies particular every partial sum degree hilbert function generated iarrobino listed possible symmetric decompositions 
hilbert functions rings dim see iarrobino section interpret ideal module terms space partials interpretation depends isomorphism thus spaces let subspace partials degree image map precisely mapped degree integral function dimk dimk coincides hilbert function hand corresponds order call order smallest degree homogeneous term denote ord call order partial largest order thus image simply space partials order least isomorphism allows interpret vector space parameterizing partials degree order modulo partials lower degree larger order precisely let subspace partials degree order least dimk dimk notation denote symmetric decomposition hilbert function bernardi jelisiejew macias marques ranestad consider space linear forms partials lin linear subspaces lin partial order least easily see isomorphism lin dimk lin dimk lin obtain canonical subspaces lin lin lin lin example let space partials generated elements following table generators arranged degree next symmetric decomposition hilbert function generators space partials degree hilbert function decomposition degree instance partial order since obtained attained higher order element generator lin lin lin next section shall enumerate polynomials given hilbert function using symmetric decomposition purpose denote hilbert function values decomposition summands symmetric around corollary partial sum hilbert functions generated degree immediate restrictions functions first hilbert functions positive values satisfy macaulay growth condition macaulay expansion example possible hilbert functions decompositions satisfy macaulay growth conditions following standard forms exotic forms point analysis would like precise description polynomials symmetric hilbert function decomposition purpose deal fact number variables involved homogeneous summand given polynomial may larger expected hilbert function explained appearance call exotic summands analyze role description polynomials given hilbert function let start examples clarifying kind phenomena treat standard exotic examples let local artinian gorenstein algebra explained represented quotient polynomial ideal unique action unit clearly choice unique section wish shed light choice made example consider ring polynomials kdp also note occur partial polynomial since space partials consider ring polynomials kdp case partial since however occurs degree may surprising linear form partial order one bernardi jelisiejew macias marques ranestad see remainder section common behaviour one observe example partials order occur partials orders occur respectively description standard exotic phenomena referring notation example want distinguish polynomials like standard behavior ones like either one variable occur partials partial whose order match degree corresponding variable end standard forms polynomials intuitively correspond minimal embeddings algebras terms variables appearing related polynomials let moreover let decomposition homogeneous summands section hilbert function symmetric decomposition particular saw dimk lin lin space linear partials order exactly let dimk lin dimension space linear partials order least degree reasons space contained space linear partials seen example linear form may occur partial order less partial first let basis linear forms agrees lin lin lin lin definition let polynomial homogeneous decomposition let symmetric decomposition hilbert function say standard form kdp lin kdp xni choice basis linear space standard forms standardforms kdp lin kdp lin important property standard forms following 
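To fix ideas, here is a minimal worked instance of the contraction pairing and the resulting Hilbert function; the polynomial is chosen by us purely for illustration and is simpler than the examples above:
\[
x^{a} \circ y^{[b]} \;=\;
\begin{cases}
y^{[b-a]}, & b \ge a,\\
0, & b < a,
\end{cases}
\qquad \text{extended bilinearly.}
\]
For \(F = y_1^{[3]} \in k_{dp}[y_1,y_2]\) the space of partials is spanned by \(1,\, y_1^{[1]},\, y_1^{[2]},\, y_1^{[3]}\), so
\[
\operatorname{Ann}(F) = (x_2,\; x_1^{4}), \qquad
A_F = k[x_1,x_2]/\operatorname{Ann}(F), \qquad
H_{A_F} = (1,1,1,1),
\]
and, since \(F\) is homogeneous, the symmetric decomposition is concentrated in its first summand.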
proposition leading summand partial degree order lies kdp lin proof let leading summand partial degree order partial degree one partial order least therefore lies lin therefore kdp lin may variables appearing show leading summands partials tempting call exotic variables reserve exotic part definition let homogeneous decomposition choose basis exotic summand degree form hxni ikdp degree homogeneous summand written kdp xni thus standard form exotic summands zero example let see work cases example therefore kdp standard form hand kdp standard form fact exotic summand ring get symmetric decomposition therefore check standard form exotic summand existence standard forms presentation let artinian gorenstein algebra one could ask exists presentation standard form case presentation exists whether relations fortunately questions quite satisfactory answers explain need notation notation let denote power series ring obtained completing ideal origin coordinates may interpret subset functionals via pairing otherwise note seen decomposing monomials particular let automorphism induces dual map condition let colength ideal supported origin clearly quotients isomorphic moreover fundamental result every may standard form fact prove may chosen linear part make precise bernardi jelisiejew macias marques ranestad definition let unique maximal ideal let automorphism say order least two mod remark every automorphism induces linear action order two automorphisms precisely act trivially thus form normal subgroup particular inverse order two automorphism also order two theorem existence standard forms let polynomial symmetric hilbert function decomposition automorphism standard form consequently element standardforms automorphism moreover one choose order two proof existence see iarrobino theorem take one compose linear map obtain required order two automorphism let linear automorphism mod automorphism order two let standard form since linear automorphism map simply linear transformation change variables standard form coordinate free preserves standard form particular standard form required automorphism order two also important interesting see explicit description action automorphism recall divided power ring proposition let automorphism let denote let proof see jelisiejew proposition example let illustrate theorem setup example standard form automorphism see linear partials spanned wish pbe order least two must kdp according proposition elements order least two since must degree three implies must mod mod therefore choose automorphism description exotic summands parameterization purposes dimension counts interesting consider families polynomials yielding isomorphic algebras least sharing hilbert symmetric decomposition given polynomial kdp lin consider family ffm kdp lin lin next result gives characterisation elements family use notation leading summand polynomial decomposition homogeneous summands proposition let kdp assume lin let kdp polynomial ffm elements order least two particular ffm algebras isomorphic proof let dimension choose basis vector space linearly independent choose elements let ffm write way kdp proposition tells kdp since cancelled terms must implies form linearly independent set since dimension also since yield algebras hilbert function get addition know variables occur leading summand also proposition polynomials degree partial means moreover order least one deg denote observe otherwise bernardi jelisiejew macias marques ranestad iterating get applying sides equality get must iterating get 
applying remaining operators way obtain remains show order least two without loss generality suppose order one write ord let since partial exists note constant therefore partial degree one belong contradiction converse suppose admits presentation clearly let automorphism proposition see induces isomorphism proves last statement shows finally lin take may apply sides get lin lin since dimension equality must hold ffm remark get alternative proof proposition take ffm use proposition see since linear partials lie leading terms partials involve variables like proof denote ideal generated initial forms elements know annihilator set leading summands partials see discussion beginning section casnati notari also proposition emsalem therefore means element order least two variable occurs may replace time minimal degree grows degree exceeds deg may discard remaining part process eventually ends may assume consider automorphism sends show showing element note family ffm hypotheses proposition polynomial need standard form result gives way adding exotic summands polynomial without changing hilbert function algebra yields also gives following result corollary let kdp kdp polynomial degree assume lin family ffm kdp dimension dim ffm next result shows exotic summands may obtained similar description proposition let polynomial degree choose basis agrees filtration write fst fex fst standard form fex fst order least two deg fst fst deg fst proof applying theorem proposition know exists polynomial standard form order least two claim basis also polynomial see let linear partial order element order usual pairing using rule since order least two implies also know ord ord also linear partial order linear partials partials order proves claim wish show replaced polynomial also standard form need operators consider expression start factoring power variable say power corresponding operator note obviously commutes remaining operators order move also term right side apply rule bernardi jelisiejew macias marques ranestad obtain equality second lines checked hand straightforward even cumbersome computation observe way get extra piece change fact new elements order least two repeating procedure rewrite order least two since terms terms involving variables exotic choice basis agrees standard form know partial degree otherwise would occur leading summand linear partials belong finally suppose deg term degree belongs kdp xna part lower degree also exotic summand terms part exotic summands therefore perform another variable corresponding operator every may replace also standard form written fst example let polynomial degree minimal possible hilbert function symmetric possible symmetric decomposition equal zero vectors theorem see standard form order least two dimk lin choose generator space let decomposition homogeneous summands standard form see may write constants since degree changing pmay assume sum becomes shorter see deg constants polynomial degree two partially depending dimension possible obtained way may chosen element square maximal ideal therefore choice together choice obtain dim spec family apolarity local cactus rank shall apply analysis hilbert functions polynomials apolar subschemes homogeneous form recall denote kdp definition subscheme apolar form homogeneous ideal contained apolarity subscheme form degree may given following natural interpretation terms embedding image lemma apolarity lemma scheme apolar proof apolar get part follows part remains consider case part follows also particularly interested 
minimal apolar subschemes form length called cactus rank form closure set forms given cactus rank called cactus variety forms although may reducible minimal apolar schemes locally gorenstein aim describe forms given minimal length local gorenstein scheme form minimal apolar scheme decomposes local artinian gorenstein schemes decomposition corresponds additive decomposition form particular cactus variety join varieties forms whose minimal apolar scheme local let form degree consider family hilb subschemes apolar supported construct particular subscheme call natural apolar subscheme element hyperplane complement hyperplane isomorphic space spec moreover get homomorphism corresponding passing homogeneous coordinate ring coordinate ring choose dual bases respectively let let natural dual spaces like coordinate ring space contains point complement hyperplane given polynomial denote subscheme definition let form linear form take subscheme since closed subset construction support used particular coordinate system however simplify following proofs coordinates first note every lifting canonical isomorphism changing necessary bernardi jelisiejew macias marques ranestad may assume may kdp homomorphism sends note induces isomorphism space homogeneous polynomials degree space polynomials degree furthermore dual homomorphism sending let homogeneous polynomial let homogeneous operator denote general polynomials following lemma gives basic relation lemma let homogeneous polynomial let homogeneous operator deg deg let deg deg let tails equal moreover divisible deg proof statement equivalent saying images equal linear space therefore statements linear respect enough prove case monomials let otherwise otherwise consider two cases first suppose conditions equivalent thus images agree next suppose suppose monomial degree thus image zero equal image proof claim second claim note assumption deg thus proof case applies giving corollary let homogeneous polynomials let proof let deg satisfy assumptions lemma therefore since isomorphism corollary let homogeneous polynomial linear form scheme see definition apolar proof take homogeneous form element annihilating corollary remark easy characterize embedding manner similar proof apolarity lemma let subspace linear forms annihilate let symmetric product subspace partials degree linear span given furthermore following lemma private communication jaroslaw lemma local scheme apolar homogeneous polynomial supported exists closed subscheme apolar moreover proof proposition lemma scheme contains closed gorenstein subscheme apolar let polynomial let homogenization divisible deg lemma asserts izg therefore izg since degree follows partial may prove proposition proof proposition lemma may assume homogeneous let lemma polynomial tail clearly thus closed subscheme minimality enough prove scheme apolar let let homogeneous corollary since get thus apolar local cactus rank general cubic surface subsection restrict characteristic first present example quartic polynomial whose cubic tail partials polynomial similar examples play role computation local cactus rank general cubic surface main issue section example let basis thus dimk hand spanned dimk notice appear leading summand partial proposition computation local cactus rank general cubic surface need translate generality assumptions properties partials first note following correspondences singular point hyper surface cone point bernardi jelisiejew macias marques ranestad lemma let general cubic form four variables set quadric rank 
less irreducible surface degree furthermore quadric rank less exist points quadric rank less cubic surface smooth every nonzero cubic surface eckhardt points plane section cone proof facts classical good recent reference see dolgachev proposition let general smooth cubic form four variables local cactus rank every linear form apolar scheme dehomogenization length defines singular curve section whose tangent cone singular point square length scheme supported apolar proof let kdp general cubic form sense lemma claim hilbert function hfl indeed suppose hfl exists linear form degree one let decomposition homogeneous components since deg divisible quadric rank hand contradicts generality assumption lemma cubic surface one dimensional family plane cuspidal cubic sections many reducible plane sections unions smooth conic tangent line either case tangent cone singular point square pick one plane section linear change coordinates may assume plane section singular tangent cone dehomogenization form polynomials kdp degree three one respectively plane section cubic summand lemma linear partials later use note monomial quadric contains hence since kdp mod also exists clearly let hence lemma conclude local gorenstein scheme apolar claim length hence local cactus rank prove claim showing exotic summand consider partials annihilates take may write shows proposition exotic summand thus hilbert function binary polynomial maximal values function clearly proves claim length finally suppose exists local gorenstein scheme length apolar must polynomial whose cubic tail coincides thus degree least four hilbert function two cases degree leading summand pure power partial cubic summand therefore proportional particular divisible contradicting generality assumption case see part standard form example choice together choice obtain variety possible thus general dimension cactus varieties cubic forms section consider polynomials hilbert function derive lower bounds dimension cactus variety cubic forms respectively cactus variety cactusr third veronese embedding according proposition closure family cubic forms admitting decomposition distinct linear forms forms zgs length dehomogenization cubic tail dehomogenization see get lower bound dimension cactus variety consider extreme opposite higher secants namely linear spaces intersect local scheme particular consider closure cactusr family cubic forms exist linear form form cubic tail polynomial hilbert function analogously using hilbert function case cubic polynomial cubic tail second case quartic polynomial dimension note union varieties isomorphic projectivisation family cubic polynomials hilbert function variety union varieties isomorphic projectivisation tailsr family cubic polynomials tails polynomials hilbert function example kdp hilbert function possible symmetric decomposition bernardi jelisiejew macias marques ranestad therefore deg general cubic polynomial kdp quadratic polynomial hxm standard form exotic summand furthermore subspace lin determined variety subspaces dimension get dim notice furthermore variety cone projectivisation dimension one less example kdp hilbert function possible symmetric decomposition therefore deg general cubic polynomial kdp cubic polynomial form hxm hxm standard form exotic summand furthermore subspaces lin lin determined variety dimension get dim notice since summand quadratic form variety cone projectivisation dimension use examples give lower bound dimension union linear spaces intersect local subscheme proposition let union 
cactusl linear spans local subschemes length dimension dim local subschemes let union cactusl linear spans length dimension dim cactusl proof clearly cactusl get inequality computing dimension subvarieties let union varies projective varieties whose cones isomorphic dimension dim dim equal right hand side lemma similarly union varies varieties isomorphic dim dim cases right hand side dimension given parametrization variety get equality show parameterization generically one one even show general unique length odd show unique tail quartic polynomial whose apolar scheme zgl length let assume general let local polynomial hilbert function depends variables therefore cone inside hyperplane let point dimensional linear vertex furthermore since partials degree partials particular singular tangent cone rank hand singular tangent cone rank hilbert function assume general let local polynomial hilbert function cubic tail quartic polynomial hilbert function depends variables hidden variable thus singular cubic hypersurface double point whose tangent cone square fact cone linear vertex dimension singular hypersurface cases let general linear forms linear section still singular point whose tangent cone rank linear section still singular cubic hypersurface inside linear subspace non reduced tangent cone singular point proof uniqueness may therefore cases reduced case following classical result bernardi jelisiejew macias marques ranestad lemma set singular cubic hypersurfaces whose tangent cone singular point rank form subvariety codimension general member set exactly one singular point proof note set singular cubic hypersurfaces form hypersurface general point hypersurface discriminant corresponds cubic hypersurface quadratic singularity tangent cone quadric rank quadrics rank quadrics rank form subvariety codimension two codimensions add codimension lemma uniqueness quadric rank point vertex notice bertini theorem general cubic hypersurface tangent cone smooth elsewhere remark notice codimension lemma consistent dimensions proposition get dim case odd show lemma set cubic hypersurfaces singular hyperplane section whose tangent cone singular point square form subvariety codimension general member set exactly one hyperplane section proof assume general cubic dimension singular hyperplane section whose tangent cone singular point square let singular point tangent cone may choose coordinates kdp cubic form singular hyperplane section kdp thus depends varies dimensional variety get cubics singular hyperplane section whose tangent cone singular point square form variety codimension codimension positive forms vary linear system cubic hypersurfaces base locus supported general member smooth tangent hyperplane section singular tangent cone square hyperplane section unique property another point distinct tangent hyperplane section also property count dimensions two consider space smooth cubic hypersurfaces whose tangent hyperplanes whose tangent cones squares support along respectively notice distinct may equal gives two cases dimension count similar dimension count show variety cubic hypersurfaces two special points positive codimension variety cubics one point therefore last statement lemma follows remark notice codimension lemma consistent dimension proposition get dim conclude parameterization birational even odd hence dimension formulas proposition dimensions respectively rewrite formulas dimensions terms lengths resp even dim odd corollary dim cactusr even odd possible hilbert function local schemes length 
one may variety analogous dimensions varieties general known remains obstacle precise dimension cactus variety cactusr finally leave open question know cactus rank general cubic surface equals rank local cactus rank see proposition know whether larger number variables local cactus rank cactus rank agree question cactus rank cubic form kdp always computed locally cactus rank least acknowledgements thank jaroslaw fruitful discussions homing plus programme foundation polish science european union regional development fund partial support mutual visits pmm thanks anthony iarrobino jerzy weyman invitation hospitality northeastern university grateful anthony iarrobino introduced subject fruitful discussions also thanks james adler help language partially supported project galaad inria sophia antipolis france marie curie fellowships carrer development deconstruct gnsaga indam mathematical department giuseppe peano turin italy politecnico turin italy doctoral fellow warsaw center mathematics computer science polish program know polish national science center project member computational complexity generalised waring type problems tensor decompositions project within canaletto executive program technological cooperation italy poland pmm partially supported para tecnologia projects geometria portugal comunidade portuguesa geometria bernardi jelisiejew macias marques ranestad sabbatical leave grant cima centro universidade projects amparo pesquisa estado paulo grant supported rcn project special geometries references alexander hirschowitz alexander james hirschowitz polynomial interpolation several variables alg bernardi brachat mourrain bernardi alessandra brachat mourrain bernard comparison different notions ranks symmetric tensors linear algebra applications bernardi ranestad bernardi alessandra ranestad kristian cactus rank cubics forms symbolic comput weronika jaroslaw secant varieties high degree veronese reembeddings catalecticant matrices smoothable gorenstein schemes algebraic geom casnati notari casnati gianfranco notari roberto irreducibility singularities gorenstain locus punctual hilbert scheme degree pure appl algebra casnati notari casnati gianfranco notari roberto structure theorem gorenstein algebras apear commut algebra casnati jelisiejew notari casnati gianfranco jelisiejew joachim notari roberto irreducibility gorenstein loci hilbert schemes via ray families algebra number theory dolgachev dolgachev igor classical algebraic geometry modern view cambridge university press emsalem emsalem jacques des points bull soc math france iarrobino iarrobino anthony associated graded algebra gorenstein artin algebra mem amer math soc amer math soc providence iarrobino kanev iarrobino anthony kanev vassil power sums gorentein algebras determinantal loci lecture notes mathematics berlin heidelberg new york jelisiejew jelisiejew joachim classifying local artinian gorenstein algebras macaulay macaulay francis sowerby properties enumeration theory modular systems proc london math soc dipartimento matematica trento via sommarive povo trento italy address faculty mathematics informatics mechanics university warsaw banacha warszawa poland address jjelisiejew departamento escola tecnologia centro instituto universidade rua ramalho portugal address pmm matematisk institutt universitetet oslo box blindern oslo norway address ranestad
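To make the inverse-system machinery above concrete, the following worked computation is an illustrative sketch, not an example taken from the text: it shows the action of S = k[x,y] on dual polynomials, the resulting annihilator ideal, and a symmetric Hilbert function.

```latex
% Illustrative apolarity computation (assumed example, standard theory).
% S = k[x,y] acts on k_dp[X,Y] by differentiation: x . X^aY^b = a X^{a-1}Y^b.
\[
F = X^{2}Y:\qquad
x\circ F = 2XY,\quad y\circ F = X^{2},\quad
x^{2}\circ F = 2Y,\quad xy\circ F = 2X,\quad y^{2}\circ F = 0 .
\]
% Hence the annihilator and the apolar algebra are
\[
\operatorname{Ann}(F) = (x^{3},\,y^{2}) \subset S,\qquad
A_F = S/\operatorname{Ann}(F),\qquad \dim_k A_F = 6 ,
\]
% and counting partials by degree (1;\ X,Y;\ X^{2},XY;\ X^{2}Y) gives
\[
H_{A_F} = (1,\,2,\,2,\,1),
\]
% which is symmetric, as it must be for an Artinian Gorenstein quotient.
% Since F is homogeneous, it is trivially in standard form and has no
% exotic summands.
```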
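For the inhomogeneous case, where the symmetric decomposition of the Hilbert function is genuinely needed, a second minimal sketch (again an assumed example, chosen to mirror the style of the running examples rather than reproduce them):

```latex
% F = X^{3} + Y^{2} in k_dp[X,Y], with deg F = j = 3. The partials by
% degree are: deg 0: k; deg 1: <X, Y>; deg 2: <X^{2}>; deg 3: <F>; so
\[
H_{A_F} = (1,2,1,1) = \Delta_0 + \Delta_1,
\qquad
\Delta_0 = (1,1,1,1),\quad \Delta_1 = (0,1,0),
\]
% with each summand \Delta_a symmetric about (j-a)/2. The linear form Y
% arises only from the lower-degree summand Y^{2} (via y \circ F = 2Y) and
% is not attained at higher order; the extra 1 in \Delta_1 records exactly
% this partial of smaller order.
```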
nov using probabilistic graphical models solving combinatorial optimization problems murilo zangari roberto aurora trinidad ramirez alexander dinf federal university cep curitiba brazil intelligent systems group university basque country san spain email auroratrinidad popular strategies pareto dominance based indicator based iii decomposition based also called scalarization function based zhang proposed decomposition based algorithm called evolutionary algorithm based decomposition framework decomposes mop number scalar optimization subproblems optimizes simultaneously collaborative manner using concept neighborhood subproblems current research various include extension algorithms continuous mops complicated pareto sets optimization problems methods parallelize algorithm incorporation preferences search automatic adaptation weight vectors new strategies selection replacement balance convergence diversity hybridization local searches procedures etc however shown traditional operators fail properly solve problems certain characterindex optimization istics present problem deception main reason shortcoming algorithms bilistic graphical models deceptive functions edas consider dependencies variables problem address issue evolutionary algorithms ntroduction instead using classical genetic operators incorporate several problems stated machine learning methods proposed algooptimization problems mops two rithms usually referred estimation distribution jectives optimized often objectives conflict algorithms edas edas collected information repwith therefore single solution optimize resented using probabilistic model later employed objectives time pareto optimal solutions generate sample new solutions practical interest decision makers selecting edas based modeling statistical dependencies befinal preferred solution mops may many even tween variables problem proposed infinite optimal solutions time consuming way encode exploit regularities complex impossible obtain complete set optimal solutions problems edas use expressive probabilistic since early nineties much effort devoted models general called probabilistic graphical develop evolutionary algorithms solving mops models pgms pgm comprises graphical compoobjective evolutionary algorithms moeas aim finding nent representing conditional dependencies set representative pareto optimal solutions single run variables set parameters usually tables marginal conditional probabilities additionally analysis different strategies used criteria maintain population optimal solutions pareto set exist algorithms nsgaiii combine ideas consequently finding approximated pareto front decomposition based pareto dominance based algorithms based modeling statistical dependencies interactions variables proposed solve wide range complex problems algorithms learn sample probabilistic graphical models able encode exploit regularities problem paper investigates effect using probabilistic modeling techniques way enhance behavior framework decomposition based evolutionary algorithm decomposes optimization problem mop number scalar subproblems optimizes collaborative manner framework widely used solve several mops proposed algorithm using probabilistic graphical models able instantiate univariate probabilistic models subproblem validate introduced framework algorithm experimental study conducted version deceptive function results show variant framework tree models learned matrices mutual information variables able capture structure problem able achieve significantly better 
results using genetic operators using univariate probability models terms approximation true pareto front graphical components pgms learned search provide information problem structure moedas using pgms also applied solving different mops specified literature number algorithms integrate different extents idea probabilistic modeling previously proposed usually algorithms learn sample probabilistic model subproblem use univariate models able represent dependencies variables paper investigate effect using probabilistic modeling techniques way enhance behavior propose framework able instantiate different pgms general framework called graphical models goals paper introduce class algorithms learn sample probabilistic graphical models defined discrete domain empirically show improve results traditional deceptive functions iii investigate influence modeling variables dependencies different metrics used estimate quality pareto front approximations show first time evidence problem structure captured models learned paper organized follows next section relevant concepts used paper introduced sections iii respectively explain basis edas used paper section introduces explains enhancements required edas order efficiently work within context section discuss class functions focus paper deceptive functions explain rationale applying probabilistic modeling functions related work discussed detail section vii experiments empirically show behavior described section viii conclusions paper possible trends future work presented section reliminaries let denote vector discrete random variables used denote assignment variables population represented set vectors size population similarly xij represents assignment jth variable ith solution population general mop defined follows minimize maximize subject decision variable vector size decision space consists objective functions objective space pareto optimality used define set optimal solutions pareto let said dominate least one solution called pareto optimal dominates set pareto optimal solutions called pareto set solutions mapped objective space called pareto front many real life applications great interest decision makers understanding nature different objectives selecting preferred final solution iii decomposes mop number scalar single objective optimization subproblems optimizes simultaneously collaborative manner using concept neighborhood subproblems subproblem associated weight vector set weight vectors several decomposition approaches used weighted sum tchebycheff two common approaches used mainly combinatorial problems let weight vector weighted sum approach optimal solution following scalar problem defined minimize maximize subject use emphasize weight vector objective function pareto optimal solution convex concave tchebycheff approach optimal solution following scalar problems defined minimize max subject reference point max pareto optimal solution exists weight vector optimal solution optimal solution pareto optimal neighborhood relation among subproblems defined neighborhood subproblem defined according euclidean distance weight vector weight vectors relationship neighbor subproblems used selection parent solutions replacement old solutions also called update mechanism size neighborhood used selection replacement plays vital role exchange information among subproblems moreover optionally external population used maintain pareto optimal solutions found definition domination minimization maximization inequalities reversed search algorithm adapted presents 
pseudocode general serves basis paper algorithm general framework initialization initial population size set respectively neighborhood size selection replacement external population termination condition met subproblem generation variation evaluate using fitness function return initialization weight vectors set euclidean distance two weight vectors computed subproblem set neighbors selection step update step initialized closest neighbors respectively according euclidean distance initial population generated random way corresponding fitness function computed external pareto initialized solutions initial population variation reproduction performed using generate new solution conventional algorithm selects two parent solutions applies crossover mutation generate update population step decides subproblems updated itr current solutions subproblems replaced better aggregation function value update step removes solutions dominated adds solution dominates improvements framework wang reported different problems need different diversity convergence controlled different mechanisms parameters algorithm far proposed versions adopted scheme selects individuals population generates offspring different strategies selection replacement proposed paper replacement proposed used strategy maximal number solutions replaced new solution bounded set much smaller replacement mechanism prevents one solution many copies population stimation distribution algorithms edas stochastic optimization algorithms explore space candidate solutions sampling probabilistic model constructed set selected solutions found far usually edas population ranked according fitness function ranked population subset promising solutions selected selection operator truncation selection truncation threshold algorithm constructs probabilistic model attempts estimate probability distribution selected solutions according probabilistic model new solutions sampled incorporated population entirely replaced algorithm stops termination condition met number generations work positive distributions denoted denotes marginal probability denotes conditional probability distribution given set selected promising solutions represented algorithm presents general eda procedure algorithm general eda generate solutions randomly termination condition met solution compute fitness function select promising solutions build probabilistic model solutions generate new candidate solutions sampling add way learning sampling components algorithm implemented also critical performance computational cost next section univariate probabilistic models introduced univariate probabilistic models univariate marginal distribution univariate probabilistic model variables considered independent probability solution product univariate probabilities variables one simplest edas use univariate model univariate marginal distribution algorithm umda umda uses probability vector probabilistic model denotes univariate probability associated corresponding discrete value paper focus binary problems learn probability vector problems set proportion selected population generate new solutions variable independently sampled random values assigned variables following corresponding univariate distribution incremental learning pbil like umda uses probabilistic model form probability vector initial probability position set probability vector updated using selected solution variable corresponding entry probability vector updated learning rate specified user prevent premature convergence position probability 
vector slightly varied generation based mutation rate parameter recently acknowledged implementations probabilistic vector update mechanism pbil fact different produce important variability behavior pbil univariate approximations expected work well functions additively decomposed functions order one also non additively decomposable functions beqsolved edas use univariate models able capture relevant bivariate dependencies problem variables details edas use tree models obtained details use pgms probabilistic modeling edas obtained optimal mutation rate edas edas use kind stochastic mutation however certain problems lack diversity population critical issue cause edas produce poor results due premature convergence remedy problem authors propose use bayesian priors effective way introduce mutation operator umda bayesian priors used computation univariate probabilities way computed probabilities include effect paper use bayesian prior natural way introduce models mutation following strategy different edas use univariate models described two possibilities estimate probability edas assume dependencies decision vari head biased coin estimate ables case probability distribution represented counts number occurrences case times pgm head throws probability estimated models pgms capable capture pairwise interactions variables tree model bayesian approach assumes probability head conditional probability variable may depend biased coin determined unknown parameter starting priori distribution parameter using one variable parent tree rule univariate probability computed probability distribution conformal bayesian called hyperparameter chosen tree model defined advance relate bayesian prior mutation authors used following theorem binary variables pectation value probability using bayesian prior parent tree parameter mutation mutation rate root using maximum likelihood estimate tree bivariate marginal distribution algorithm bmda prov algorithm posed uses model based set mutually graphical models independent trees forest iteration algorithm algorithm shows steps proposed tree model created sampled generate new candidate solutions based conditional probabilities learned framework first initialization procedure generates initial solutions external pareto initialized population algorithm proposed combines solutions within main tures algorithms introduced case termination criteria met subproblem probabilistic model learned using base population eda learning procedure works follows step compute univariate bivariate marginal solutions neighborhood sampling quencies pjk using set selected procedure used generate new solution model new solutions used update parent population promising solutions according mechanism finally new step calculate matrix mutual information using solution used update described algorithm univariate bivariate frequencies simple way use probabilistic models step calculate maximum weight spanning tree learning sampling probabilistic model scalar mutual information compute parameters subproblem using set closest neighbors selected model population therefore generation idea computing maximum weight spankeeps probabilistic models ning tree matrix mutual information algorithm edas presented previous section umda pbil iii instantiated convenience notation set mutually independent trees framework also genetic operators crossover refereed tree algorithm initialization initial population size randomly generated set respectively neighborhood size selection replacement external population termination 
condition met subproblem generation learning case choose two parent solutions case umda learn probabilistic vector using solutions case pbil learn incremental probabilistic vector using solutions case learn tree model pit using solutions sampling try times sample new solution different solution case apply crossover mutation generate case pbil umda sample using probability vector case sample using tree model pit compute fitness function return mutation used standard applied using learning sampling procedures general allows introduction types probabilistic graphical models pgms like bayesian networks markov networks different classes pgms also used scalar subproblem consider particular scenario analysis presented moreover introduces particular features explained next section learning models complete neighborhood usually edas population sizes large size selected population high population size subproblem set neighbors plays similar role selected therefore main difference edas presented section former instead selecting subset individuals based fitness keep unique probabilistic model probabilistic model computed neighborhood solutions scalar subproblem diversity preserving sampling preliminary experiments detected one cause early convergence algorithm solutions already population newly sampled sampling solutions already present population also detrimental terms efficiency since solutions evaluated way avoid situation added simple procedure new sampled solution tested presence neighborhood subproblem new sampled solution equal parent solution algorithm discards solution samples new one different solution found usually size neighborhood although propose low probability select two parent solutions complete population maximum number trials reached procedure specially suitable deal expensive fitness functions maximum number tries specified user call sampling incorporates verification procedure diversity preserving sampling applied emphasize algorithm called ulti objective eceptive ptimization roblem exists class scalable problems difficulty given interactions arise among subsets decision variables thus problems require algorithm capable linkage learning identifying exploring interactions decision variables provide effective exploration decomposable deceptive problems played fundamental role analysis eas mentioned one advantages edas use probabilistic graphical models capacity capture structure deceptive functions different works literature proposed edas solving decomposable deceptive problems one example class decomposable deceptive functions fixed number variables subsets also called partitions building blocks traps deceive algorithm away optimum interactions variables partition considered according standard crossover operators genetic algorithms fail solve traps unless bits partition located close chosen representation pelikan used version analyzing behavior hierarchical boa hboa functions equation equation consist evaluating vector decision variables positions divided disjoint subsets partitions bits assumed multiple partition fixed entire optimization run algorithm given information partitioning advance bits partition contribute trap order using following functions number building blocks number ones input string bits problem conflicting thus one single global optimum solution set pareto optimal solutions moreover amount possible solutions grows exponentially problem size previously investigated pareto based edas vii elated work section review number related works emphasizing differences work 
presented paper besides find previous report decomposition approaches incorporate pgms solving combinatorial mops using univariate edas estimation distribution algorithm based decomposition proposed solving traveling salesman problem mtsp subproblem uses matrix represent connection strength cities best solutions found far matrix combined priori information distances cities problem new matrix represents probability sth tth cities connected route ith although matrix encodes set probabilities relevant corresponding tsp subproblem matrices considered pgms since comprise graphical component representing dependencies problem type updates applied matrices related parametric learning structural learning done pgms furthermore type models resemble class structures traditionally used aco heavily depend use prior information case incorporation information distances cities therefore consider member class algorithms another approach combining use probabilistic models moead mtsp presented paper univariate model used encode probabilities city tsp assigned possible positions permutation therefore model represented matrix dimension comprising univariate probabilities city configurations position one main difference hybrid approach proposal zhou single matrix learned using solutions therefore information contained univariate model combines information subproblems disregarding potential regularities contained local neighborhoods furthermore since sampling process matrix take account constraints related permutations repair mechanisms penalty functions used correct infeasible solutions consequence much information sampled model solution modified application repair mechanism previous applied solving mops application edas solving permutation problems increasing interest number edas also specifically designed deal problems framework proposed paper investigated binary problems however edas deal permutation space incorporated framework future authors proposed univariate solving binary knapsack problem mokp uses adaptive operator sampling step preserve diversity prevents learned probability vector premature convergence therefore sampling step depends univariate probabilistic vector extra parameter using multivariate edas algorithm proposed solve optimization problems proposed framework involves two ingredients concept called generalized decomposition decision maker guide underlying search algorithm toward specific regions interest entire pareto front eda based statistics namely method applied set continuous functions obtained results showed proposed algorithm competitive standard class statistics used normal univariate models limit ability algorithm capture represent interactions variables univariate densities updated using updating rule one originally proposed pbil algorithm covariance matrix adaptation evolution strategy used probabilistic model although introduced developed context evolutionary strategies learns gaussian model search covariance matrix learned cmaes able capture dependencies variables however nature probabilistic modeling continuous domain different one discrete domain methods used learning sampling models different furthermore stated main purpose work presented investigate extent cmaes could appropriately integrated benefits one could obtain therefore emphasis put particular adaptations needed efficiently learn sample model different context since adaptations essentially different ones required discrete edas used paper contributions different pelikan discussed decomposable problems difficulty 
authors attempted review number mixturebased iterated density estimation algorithm mmidea mixed bayesian optimization algorithm mmboa hierarchical boa mohboa moreover authors introduced improvement mohboa algorithm combines three ingredients viii xperiments hierarchical bayesian optimization algorithm hboa proposed framework imple concepts nsgaii iii mented comparison study four different instanclustering objective space experimental study tiations used convenience called algorithm showed mohboa efficiently solved instantiated problems large number competing pbil building blocks algorithm capable effective reto evaluate algorithms deceptive functions combination building sampling bayesian networks section used compose bidecision trees significantly outperformed algorithms objective trap problem different number variables standard variation operators problems require effective equation defines notation linkage learning function covered use concepts algorithms nsgaii since past years framework rap one major frameworks design moeas subject incorporating probabilistic graphical models seems promising technique solve scalable deceptive performance metrics problems martins proposed new approach solvthe true known therefore two perforing decomposable deceptive problems mance metrics used evaluate algorithms performance called uses probabilistic model based inverted generational distance metric igd phylogenetic tree tested number true pareto optimal solutions found deceptive functions algorithm generations stop condition outperformed mohboa terms number algorithms generations tion evaluations achieve exact specially let set uniformly distributed pareto optimal problem question increased size solutions along true objectives question discussed probabilistic model approximated set true obtained algorithm identify correct correlation variables inverted generational distance problem combination improbable values variables defined avoided however model becomes pressive computational cost incurred algorithm igd build model also grows thus efficiency algorithm building models minimum euclidean distance accuracy proposal generation solutions large enough igd probabilistic graphical models kept therefore higher could measure convergence diversity sense number subproblems drawback proposal lower igd better approximation terms efficiency time consuming however true proposed adequate commitment number true pareto optimal solutions number efficiency accuracy approximated pareto solutions composes belongs expected behave satisfactorily defined contributions respect previous work summarize main contributions work cardinality respect related research also use statistical test rank use first time probabilistic graphical model algorithms according results obtained metrics within solve combinatorial mops two algorithms achieve rank means case previous incorporate significant difference tic models cover univariate models investigate particular class problems deceptive parameters settings mops exist extensive evidence following parameters settings used expericonvenience using probabilistic graphical models mental study introduced class probabilistic models structure model learned number subproblems neighborhood solution algorithms proposed literature investigate question problem innumber subproblems correspondent weight vectors controlled parameter teractions kept scalarized subproblems interactions translated probabilistic makes wide spread distribution weight models vectors according thus table results average igd 
average number true pareto optimal solutions computed form runs combination problem size algorithm preserving sampling column total number true pareto solutions problem average number true pareto optimal solutions diversity preserving sampling standard sampling average igd measure igd table kruskall wallis statistical ranking test according igd combination problem size algorithm preserving sampling first value average rank runs runs algorithms second value brackets final ranking two ranks means significant difference ranking number true pareto optimal solutions standard sampling diversity preserving sampling ranking igd measure average true pareto solutions average true pareto solutions standard sampling diversity preserving sampling standard sampling diversity preserving sampling generations generations fig average true pareto optimal solutions generations chart shows comparison standard sampling diversity preserving sampling using algorithm problem set consequently neighborhood size number selected solutions crucial edas test range neighborhood size values also maximal number replacements new solution scalarization function applied weighted sum tchebycheff approaches achieve similar results results tchebycheff presented paper maximum number tries diversity preserving pling procedure set parameter value neighborhood size genetic operators uniform crossover mutation pbil learning rate combination algorithm parameters setting independently run times problem size comparison different variants first algorithms evaluated using fixed neighborhood size value table presents average values igd number true pareto solutions computed using approximated found algorithms average true pareto solutions average true pareto solutions generations generations fig average true pareto solutions generations problem size average igd average igd neighborhood size neighborhood size fig average igd different neighborhood sizes selection table presents results test explanation behavior potential advantageous effect significance level applied results obtained introduced clustering solutions determined algorithms results summarized table framework benefit would independent best rank algorithm shown bold figure type models used represent solutions grouping show behavior algorithms throughout generations similar solutions neighborhood may allow univariate according average obtained algorithm produce global optima even functions analysis results extract following deceptive conclusions according table uses influence neighborhood size selection diversity preserving mechanism learning algorithm achieved best rank neighborhood size direct impact search cases according indicators moreover diversity ability balance convergence preserving sampling improved behavior diversity target problem figure presents igd algorithms mainly values obtained different neighborhood sizes terms quality approximation true figure mentioned number selected solutions confirms results therefore diversity preserving impact edas eda sampling positive effect algorithms needs large set selected solutions able learn problem algorithms able achieve dependencies decision variables fact diverse set solutions additionally figure shows may explain see figure differences achieves better results algorithms lower neighborhood size becomes smaller algorithms first generations remarkable point results moreover algorithms find least one global small better solve algorithms except optimal solution true pareto solution one possible achieves good results matrix matrix 
subproblem subproblem matrix matrix subproblem subproblem matrix matrix subproblem subproblem fig frequency matrices heat maps learned subproblems problem size large neighborhood size even neighborhood size important influence behavior algorithm parameter seen isolation parameters also influence behavior algorithm analyzing structure problems captured tree model one main benefits edas capacity reveal priori unknown information problem structure although question extensively studied domain analysis multiobjective domain still therefore relevant question determine extent structure problem captured probabilistic models used relevant question since clue types interactions could captured models learned scalarized functions section structures learned solving different subproblems investigated generation subproblem tree model built according probabilities obtained selected population represent tree model matrix position mjk represents relationship pairwise two variables mjk parent tree model learned otherwise mjk figure represents merge frequency matrices obtained runs frequencies represented using heat maps lighter colors indicate higher frequency structure objective functions analysis plotted merge matrices learned extreme conducted considering strucsubproblems middle subproblem tures learned different neighborhood sizes different using problem size results subproblems found even relatively small previous section frequency matrices two neighborhood neighborhoods used comparison standard sizes presented population sizes used edas models able capture matrices clearly show strong relationship interactions functions one potential application subsets variables building blocks size shows finding could reuse transfer models algorithm able learn structure subproblems similar way application structural functions notice neighborhood size transfer related problems able capture accurate structures scalability probexalts good results found accordance igd lem deceptive functions metric explained fact higher investigated appropriate model population size reduces number spurious correlations able learn higher order interactions order learned data therefore pgms based bayesian markov networks moreover analyzing different scalar subproblems could application case see algorithm able learn structure even directions future work conceive strategies middle scalar subproblem two conflict avoid learning model subproblem would objectives functions compete every improve results terms computational cost use partition building block decomposable problem probable configurations model speed convergence iii consider application hybrid schemes onclusions future work incorporating local search take advantage informain paper novel general framework able tion learned models evaluate instantiate probabilistic graphical models named benchmarks deceptive mops introduced pgms used obtain comprehensible representation search space consequently acknowledgments algorithms incorporate pgms provide model work received support cnpq productivity expressing regularities problem structure well grant nos program science without boras final solutions pgm investigated paper ders nos capes brazil government takes account interactions variables program basque government maximum weight spanning tree spanish ministry science innovation probabilities distributions experimental study biobjective version well known deceptive function bir eferences conducted terms accuracy approximating coello lamont van veldhuizen evolutionary algoof true results shown 
instantiation rithms solving ser genetic called significantly evolutionary computation berlin heidelberg springer zhang multiobjective evolutionary algorithm better uses univariate edas traditional based decomposition ieee transactions evolutionary compugenetic operators tation vol moreover enhancement introduced wang zhang maoguo aimin replacement strategy balalancing convergence diversity proceedings new simple effective mechanism sampling congress evolutionay computation cec ieee proposed called diversity preserving sampling since sampling solutions already present neighbor solutions deb jain evolutionary optimization algorithm using nondominated sorting approach part detrimental effect terms convergence solving problems box constraints ieee transaction evolutionary diversity preserving sampling procedure tries generating computation vol solutions different parent solutions deb zhang kwong evolutionary manyobjective optimization algorithm based dominance decomposiing performance indicators algorithms ieee trans evolutionary computation comparison improved results terms diversity zhang multiobjective optimization problems approximation complicated pareto sets ieee transaction evolutionary computation analysis influence neighborhood size behavior algorithms conducted general ishibuchi akedo nojima study specification scalarizing function knapsack increasing neighborhood size detrimental effect proceedings international learning intelligent optimization conference lion ser lecture notes computer although always case science vol springer also independent type models used represent tan jiao wang uniform solutions grouping similar solutions neighborhood design new version optimization problems many objectives computers operations research vol may allow produce global optima even funcpp tions deceptive nebro durillo study parallelization finally also investigated first time metaheuristic learning intelligent optimization springer extent models learned capture pilat neruda incorporating user preferences coevolution weights proceedings genetic evolutionary computation conference gecco acm liu jiao sun adaptive weight adjustment evolutionary computation vol alhindi zhang tabu search multiobjective permutation flow shop scheduling problems proceedings ieee congress evolutionary computation cec beijing china july ieee zhang battiti hybridization decomposition local search multiobjective optimization ieee cybernetics vol karshenas bielza santana review probabilistic graphical models evolutionary computation journal heuristics vol lozano inza bengoetxea towards new evolutionary computation advances estimation distribution algorithms springer recombination genes estimation distributions binary parameters parallel problem solving nature ppsn ser lectures notes computer science voigt ebeling rechenberg schwefel vol berlin springer pelikan sastry goldberg multiobjective estimation distribution algorithms scalable optimization via probabilistic modeling algorithms applications ser studies computational intelligence pelikan sastry eds springer berlanga molina optimization adaptive resonance estimation distribution algorithm comparative lion ser lecture notes computer science coello vol springer karshenas santana bielza optimization based joint probabilistic modeling objectives variables ieee transactions evolutionary computation vol shim tan tan hybrid estimation distribution algorithm solving multiple traveling salesman proceedings congress evolutionary computation cec ieee zhou gao zhang decomposition 
based estimation distribution algorithm multiobjective traveling salesman computers mathematics applications vol wang hua yuan scale adaptive reproduction operator decomposition based estimation distribution algorithm proceedings congress evolutionary computation cec ieee giagkiozis purshouse fleming generalized decomposition cross entropy methods inf vol derbel liefooghe brockhoff aguirre tanaka injecting proceedings genetic evolutionary computation conference gecco acm giagkiozis purshouse fleming generalized decomposition international conferenceevolutionary optimization emo ser lecture notes computer science vol springer pelikan analysis epistasis correlation landscapes nearest neighbor interactions proceedings conference genetic evolutionary computation gecco acm equation response selection use prediction evolutionary computation vol baluja incremental learning method integrating genetic search based function optimization competitive learning carnegie mellon university pittsburgh tech zangari santana pozo mendiburu pbils unveiling different learning mechanisms pbil variants submmitted publication mahnig ochoa schemata distributions graphical models evolutionary optimization journal heuristics vol pelikan bivariate marginal distribution algorithm advances soft computing engineering design manufacturing roy furuhashi chawdhry eds london springer santana ochoa soto mixture trees factorized distribution algorithm proceedings genetic evolutionary computation conference gecco san francisco morgan kaufmann publishers baluja davies using optimal combinatorial optimization learning structure search space proceedings international conference machine learning fisher mahnig muhlenbein optimal mutation rate using bayesian priors estimation distribution saga ser lecture notes computer science steinhofel vol springer pelikan goldberg boa bayesian optimization algorithm proceedings genetic evolutionary computation conference gecco vol orlando morgan kaufmann publishers san francisco shakya santana markov networks evolutionary computation springer goldberg simple genetic algorithms minimal deceptive problem genetic algorithms simulated annealing davis london pitman publishing deb goldberg analyzing deception trap functions university illinois illinois genetic algorithms laboratory urbana illigal report sastry abbass goldberg johnson substructural niching estimation distribution algorithms proceedings conference genetic evolutionary computation gecco acm echegoyen lozano santana exact bayesian network learning estimation distribution algorithms proceedings congress evolutionary computation cec ieee press echegoyen mendiburu santana lozano toward understanding edas based bayesian networks quantitative analysis ieee transactions evolutionary computation vol pelikan sastry goldberg multiobjective hboa clustering scalability university illinois illinois genetic algorithms laboratory urbana illigal report february martins soares vargas delbem phylogenetic algorithm solving decomposable deceptive problems evolutionary optimization springer pelikan hierarchical bayesian optimization algorithm toward new generation evolutionary algorithms ser studies fuzziness soft computing springer vol ceberio irurozki mendiburu lozano review estimation distribution algorithms combinatorial optimization problems progress artificial intelligence vol ceberio mendiburu lozano ranking model optimization problems proceedings congress evolutionary computation cec ieee ieee press botev kroese rubinstein ecuyer cross entropy method 
unified approach combinatorial optimization simulation mathematical zhang zhou jin regularity model based multiobjective estimation distribution algorithm ieee transactions evolutionary computation vol hansen ostermeier adapting arbitrary normal mutation distributions evolution strategies covariance matrix adaptation proceedings ieee international conference evolutionary computation thierens bosman optimization iterated density estimation evolutionary algorithms using mixture models evolutionary computation probabilistic graphical models proceedings third symposium adaptive systems cuba havana cuba march laumanns ocenasek bayesian optimization algorithm optimization proceedings parallel problem solving nature conference ppsn ser lecture notes computer science vol springer khan bayesian optimization algorithms hierarchically difficult problems master thesis university illinois illinois genetic algorithms laboratory urbana deb thiele laumanns zitzler scalable multiobjective optimization test problems proceedings congress evolutionary computation cec vol ieee press zitzler thiele multiobjective evolutionary algorithms comparative case study strength pareto approach ieee transactions evolutionary computation vol derrac garcia practical tutorial use nonparametric statistical tests methodology comparing evolutionary swarm intelligence swarm evolutionary computation vol brownlee mccall pelikan influence selection structure learning markov network edas empirical study proceedings genetic evolutionary computation conference gecco acm santana lozano protein folding simplified models estimation distribution algorithms ieee transactions evolutionary computation vol fritsche strickler pozo santana capturing relationships optimization proceedings brazilian conference intelligent systems bracis natal brazil accepted publication santana bielza lozano mining probabilistic models learned edas optimization problems proceedings genetic evolutionary computation conference gecco acm santana mendiburu lozano structural transfer using edas application tagging snp selection proceedings congress evolutionary computation cec ieee press best paper award congress evolutionary computation
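In standard notation, the two quantities the experimental study above relies on (hedged as the usual textbook forms, which this text appears to follow) are the Tchebycheff scalarization and the inverted generational distance:

```latex
% Tchebycheff scalarization of a subproblem with weight vector \lambda and
% reference point z^{*}:
\[
g^{te}(x \mid \lambda, z^{*}) \;=\; \max_{1 \le i \le m}\;
\lambda_i \,\bigl|\, f_i(x) - z^{*}_i \bigr| ,
\]
% and the inverted generational distance of an approximation set P against
% a reference set P^{*} of Pareto-optimal points:
\[
\mathrm{IGD}(P, P^{*}) \;=\; \frac{1}{|P^{*}|}
\sum_{v \in P^{*}} \min_{u \in P}\, d(v, u),
\]
% where d is Euclidean distance in objective space; a lower IGD indicates a
% better approximation in terms of both convergence and diversity.
```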
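The bi-objective deceptive benchmark can be made concrete with a short sketch. The bit string is split into disjoint blocks of k bits; the first objective applies the trap to the number of ones per block and the second to the number of zeros, so the two objectives conflict. The constants below follow one common formulation (trap_k(u) = k if u = k, else k - 1 - u), which may differ from the exact variant used in the experiments.

```ocaml
(* Order-k deceptive trap on the unitation u of one block. *)
let trap k u = if u = k then k else k - 1 - u

(* Bi-objective trap: f1 traps the ones, f2 traps the zeros, per block.
   Assumes the length of x is a multiple of k. *)
let bi_trap k (x : int array) =
  let n = Array.length x in
  let f1 = ref 0 and f2 = ref 0 in
  let i = ref 0 in
  while !i < n do
    let ones = ref 0 in
    for j = !i to !i + k - 1 do ones := !ones + x.(j) done;
    f1 := !f1 + trap k !ones;          (* trap on number of ones  *)
    f2 := !f2 + trap k (k - !ones);    (* trap on number of zeros *)
    i := !i + k
  done;
  (!f1, !f2)
```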
9
Towards Automatic Resource Bound Analysis for OCaml

Jan Hoffmann, Ankush Das (Carnegie Mellon University), Shu-Chun Weng (Yale University)

November

Abstract

This article presents a resource analysis system for OCaml programs. The system automatically derives worst-case resource bounds for higher-order polymorphic programs with user-defined inductive types. The technique is parametric in the resource and can derive bounds for time, memory allocations, and energy usage. The derived bounds are multivariate resource polynomials: functions of different size parameters that depend on the standard OCaml types. Bound inference is fully automatic and is reduced to a linear optimization problem that is passed to an off-the-shelf solver. Technically, the analysis system is based on a novel multivariate automatic amortized resource analysis (AARA). It builds on existing work on linear AARA for higher-order programs with user-defined inductive types, and on multivariate AARA for first-order programs with lists and binary trees. For the first time, it is possible to automatically derive polynomial bounds for higher-order functions and polynomial bounds that depend on user-defined inductive types. Moreover, the analysis handles programs with side effects and even outperforms the linear bound inference of previous systems, while at the same time preserving the expressivity and efficiency of existing AARA techniques. The practicality of the analysis system is demonstrated by an implementation and an integration with Inria's OCaml compiler. The implementation is used to automatically derive resource bounds for functions and lines of code derived from OCaml libraries, the CompCert compiler, and implementations of textbook algorithms. In a case study, the system infers bounds on the number of queries that are sent by OCaml programs to DynamoDB, a commercial NoSQL cloud database service.

1 Introduction

The quality of software crucially depends on the amount of resources, such as time and memory, that are required for its execution. Statically understanding and controlling the resource usage of software continues to be a pressing issue in software development. Performance bugs are common and are among the bugs that are most difficult to detect, and large software systems are plagued by performance problems. Moreover, many security vulnerabilities exploit the space or time usage of software.

Developers would greatly profit from information about and specifications of the resource usage of software libraries and interfaces, and from automatic warnings about potentially high resource usage during code review. Such information is particularly relevant in contexts such as mobile applications and cloud services, where resources are limited or resource usage is a major cost factor.

Recent years have seen fast progress in developing frameworks for statically reasoning about the resource usage of programs. Many advanced techniques for imperative integer programs apply abstract interpretation to generate numerical invariants. The obtained size-change information forms the basis for the computation of actual bounds on loop iterations and recursion depths, using counter instrumentation, ranking functions, recurrence relations, or abstract interpretation itself. Automatic resource analysis techniques for functional programs are based on sized types, recurrence relations, and amortized resource analysis.

Despite these major steps forward, there are still many obstacles to overcome before resource analysis technologies can be made available to developers. On the one hand, typed functional programs are particularly amenable to automatic analysis, since the use of pattern matching and recursion often results in a relatively regular code structure, and since types provide detailed information about the shape of data structures. On the other hand, existing automatic techniques for higher-order programs can only infer linear bounds; furthermore, the techniques that can derive polynomial bounds are limited to bounds that depend on predefined lists, binary trees, and integers; and finally, resource analyses for functional programs have so far been implemented for custom languages that are not supported by mature tools for compilation and development.

The goal of our long-term research effort is to overcome these obstacles by developing a resource-aware version of the functional programming language OCaml, called Resource Aware ML (RAML). RAML is based on automatic amortized resource analysis (AARA), which derives multivariate polynomials that are functions of the sizes of the inputs.
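To illustrate the kind of bounds at issue, consider the following minimal sketch (our own example, not taken from the article's benchmarks). The evaluation cost of pairs is proportional to |l1|*|l2| plus a term linear in |l1|, a multivariate polynomial in the sizes of the two inputs; an AARA-style analysis expresses such bounds as non-negative linear combinations of binomial coefficients over the input sizes.

  let rec cross_with x l2 =
    match l2 with
    | [] -> []
    | y :: ys -> (x, y) :: cross_with x ys

  let rec pairs l1 l2 =
    match l1 with
    | [] -> []
    | x :: xs -> cross_with x l2 @ pairs xs l2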
In this paper, we report on three main contributions that are part of this effort. First, we present the first implementation of AARA that is integrated into an industrial-strength compiler. Second, we develop the first automatic resource analysis system that infers multivariate polynomial bounds which depend on size parameters of complex, user-defined data structures. Third, we present the first AARA that infers polynomial bounds for higher-order functions. The techniques we develop are not tied to a particular resource: the analysis is parametric in the resource of interest. RAML infers tight bounds for many complex example programs, such as sorting algorithms with complex comparison functions, Dijkstra's shortest-path algorithm, and common higher-order functions on sequences such as nested maps and folds. The technique is naturally compositional, tracks size changes of data across function boundaries, and can deal with amortization effects that arise, for instance, from the use of a functional queue. Local inference rules generate linear constraints, so that bound inference is reduced to linear program solving, despite the fact that we derive polynomial bounds.

To ensure compatibility with OCaml's syntax, we reuse the parser and the type inference engine of Inria's OCaml compiler to extract a typed syntax tree, on which we perform resource-preserving code transformations before the actual analysis. To precisely model the evaluation of OCaml, we introduce a novel operational semantics that makes the efficient handling of function closures in the Inria compiler explicit. The semantics is complemented by a new type system that refines function types so as to express a wide range of bounds.

We introduce a novel class of multivariate resource polynomials, which map data of a given type to a number. The set of multivariate resource polynomials that is available for bound inference depends on the types of the input data: the polynomials are parametric in integers, in the lengths of lists, and in the numbers of particular nodes of values of inductive data types. As a special case, resource polynomials can contain conditional additive factors. The novel multivariate resource polynomials are a substantial generalization of the resource polynomials previously defined for lists and binary trees.

To deal with realistic OCaml code, we develop a novel multivariate AARA that handles higher-order functions. To this end, we draw inspiration from multivariate AARA for first-order programs and from linear AARA for higher-order programs; the new analysis is, however, more than a combination of existing techniques. For instance, it can infer a linear bound for the curried append function on lists, which was not possible previously. Moreover, we address specifics of Inria's OCaml compiler, such as the evaluation order of function arguments, which efficiently avoids the creation of intermediate closures.

We performed experiments with thousands of lines of OCaml code. We do not yet support all language features of OCaml, so it is not straightforward to automatically analyze complete existing applications. However, the automatic analysis performs well on code that uses only the supported language features. For instance, when applied to the OCaml standard list library, RAML automatically derives bounds for most of the functions, and the derived bounds are mostly asymptotically tight. It is also easy to develop and analyze real OCaml applications if the current capabilities of the system are kept in mind. We present a case study in which we automatically bound the number of queries that an OCaml program issues to Amazon's DynamoDB NoSQL cloud database service. These bounds are interesting because Amazon charges DynamoDB users based on the number of queries made to a database. Our experiments are easily reproducible: the source code of RAML, the OCaml code of the experiments, and an interactive web interface are available online.

2 Overview

In this section we describe the technical development and give a short overview of the challenges and achievements of this work.

Example bound analysis. As a running example that demonstrates the user interaction with RAML, Figure 1 contains an example bound analysis of OCaml code; the code in the figure serves as a running example in this article. The function abmap is a polymorphic map function for lists that contain Acons nodes and Bcons nodes. It takes two functions as arguments and applies them to the data stored in the A-nodes and B-nodes, respectively. The function asort takes a comparison function and an A-B list whose A-nodes contain lists; it uses quicksort (the code of quicksort is also automatically analyzed and available online) to sort these lists, and leaves the B-elements unchanged. The function asort' is a variation of asort that raises an exception when it encounters a B-node. To derive a resource bound with RAML, the user needs to pick a maximal degree for the search space of polynomials and a resource metric. For the example analysis in Figure 1, we picked the metric steps, which counts the number of evaluation steps in the big-step semantics.
Within seconds, RAML reports bounds for the functions, as shown in the output excerpt in Figure 1. In this case, the derived bounds are tight in the sense that there are inputs of every size for which the reported number of evaluation steps is exactly reached. For the derived bound of abmap, RAML assumes that the function arguments do not consume resources, which is why we get a linear bound. In the case of asort, we derive a bound that is quadratic in the maximal length of the lists that are stored in the A-nodes.

  type ('a,'b) ablist = Acons of 'a * ('a,'b) ablist
                      | Bcons of 'b * ('a,'b) ablist
                      | Nil

  let rec abmap f g abs =
    match abs with
    | Acons (a, abs') -> Acons (f a, abmap f g abs')
    | Bcons (b, abs') -> Bcons (g b, abmap f g abs')
    | Nil -> Nil

  let asort f abs = abmap (quicksort f) (fun b -> b) abs

  let asort' f abs = abmap (quicksort f) (fun b -> raise Not_found) abs

  let btick abs = abmap (fun a -> a) (fun b -> Raml.tick 2.5; b) abs

Figure 1: The function abmap serves as a running example in this article. The excerpt of the RAML output reports the run time of the analysis and the simplified bounds for abmap, asort, and asort', given in terms of the number of A-nodes of the argument, the number of B-nodes of the argument, and the maximal length of the lists stored in the A-nodes.

When deriving the linear bound for abmap, we assume that the arguments f and g do not consume resources. If abmap is applied to concrete functions, as in asort and asort', the cost of the concrete application is bounded as well. For example, every Acons node can contribute a cubic cost to the bound for asort. Moreover, the number of Bcons nodes contributes a linear factor to the bound for asort, since the B-nodes are simply traversed. For asort', the linear factor that depends on the number of B-nodes disappears: RAML automatically deduces that the traversal is aborted in case a B-node is encountered.

The tick metric can be used to derive bounds for user-defined metrics. An instructive example is the function btick. For the tick metric, RAML derives a tight bound that is linear in the number of B-nodes of the argument list: the bound exactly matches the sum of the ticks executed during an evaluation of btick. Ticks can also be negative, to express that resources become available. Note that RAML cannot make any guarantees about the precision of the derived bounds in general. Since every derived bound also proves termination, bound analysis is an undecidable problem, and there are many functions for which RAML cannot derive a bound, either because no polynomial bound exists or because the analysis is not able to find one. In such cases, RAML terminates with a message like "A bound for abmap could not be derived."

Currying and function closures. Currying and function closures pose a challenge to automatic resource analysis systems that has not been addressed in the past. To see why, assume that we want to design a type system to verify resource usage, and consider the curried append function, which has the type append : 'a list -> 'a list -> 'a list in OCaml. At first glance, we might say that the time complexity of append is linear in the length of the first argument. But a closer inspection of the definition of append reveals that this is a gross simplification. In fact, the complexity of the partial function call append l is constant; the complexity of the resulting function is linear in the length of its argument, the list l that is captured in the function closure. We are not aware of any existing approach that can automatically derive a worst-case time bound for the curried append function; previous AARA systems, for example, would fail without deriving a bound.

In general, we have to describe the resource consumption of a curried function with multiple expressions, each of which describes the complexity of the computation that takes place after the function has been applied to a further argument. In Inria's OCaml implementation, the situation is even more complex, since the resource usage (time and space) depends on how a function is used at its call sites. If append is partially applied to one argument, then a function closure is created as expected. However, and this is one of the reasons why OCaml's performance is so great, if append is applied to both arguments at the same time, then no intermediate closure is created, and the performance of the function is even better than that of an uncurried version that takes a pair of arguments.

To model the resource usage of curried functions accurately, we refine function types to capture how functions are used at their call sites. For example, append can be assigned two different types: the standard curried type 'a list -> 'a list -> 'a list, which implies that the function may be partially applied to one argument first, and a bracketed variant of this type, which implies that the function is applied to both arguments at the same time. It is of course possible for a function to have both types; technically, we achieve this by using let polymorphism. With the second type, our system automatically derives tight time and space bounds that are linear in the first argument. However, the system fails to derive a bound for the first type. The reason is that we made the design decision not to derive bounds that asymptotically depend on data captured in function closures, in order to keep the complexity of the system at a manageable level.
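The closure behavior just described can be made concrete with a small sketch (ours, not from the article). The partial application below has constant cost and only allocates a closure capturing the first list; the traversal cost, linear in the captured list, is paid when the second argument arrives:

  let rec append xs ys =
    match xs with
    | [] -> ys
    | x :: xs' -> x :: append xs' ys

  (* Constant cost: allocates a closure that captures the list [1; 2; 3]. *)
  let app1 = append [1; 2; 3]

  (* Linear cost in the captured list: the traversal happens only here. *)
  let result = app1 [4; 5]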
fortunately append belongs large set ocaml functions standard library defined form let rec function partially applied computation happens creation closure result eta expansion change resource behavior programs means example safely replace expression let append expression let append prior analysis consequently always use type list list list append successfully analyze conditions functions analyzed might look complex first boiled simple principle resource usage function must expressible function sizes arguments arguments main challenge resource analysis functions arguments large extent problem successfully solved linear resource bounds previous work basically case towards automatic resource bound analysis ocaml reduced case arguments available necessary reanalyze functions every call site since abstract resource usage constraint system holes constraints function arguments however presentation system way mixes type checking type inference therefore chose present analysis system declarative way bound function arguments derived respect given set resource behaviors argument functions concrete advantage declarative view derive meaningful type function like map lists even argument available function map following types list list list list unlike append resource usage map depend size first argument types equivalent system except cost creating intermediate closure argument available previous systems produce constraint system meaningful user innovation work also able report meaningful resource bound map arguments available end assume argument function consume resources example report case map number evaluation steps needed number heap cells needed length input list bounds useful two purposes first developer see cost map contributes total cost program second time bound map proves map guaranteed terminate argument terminates every input contrast consider function list list list list defined follows let rec match let tail raml able derive bound since number evaluation steps even termination depends argument however raml derives tight bound function polynomial bounds inductive types existing aara systems either limited linear bounds polynomial bounds functions sizes simple predefined lists data structures contrast work presents first analysis derive polynomial bounds depend size parameters complex data structures bounds derive multivariate resource polynomials take account individual sizes inner data structures possible simplify resource polynomials user output essential precise information intermediate results derive tight bounds general resource bounds built functions count number specific tuples one form nodes data structure simplest form without considering data stored inside nodes form apr setting stage inductive data structure constructors apr denotes tree traversal tree able keep track changes quantities pattern matches data construction fully automatically generating linear constraints time allow accurately describe resource usage many common functions way done previously simple types interesting special case also derive conditional bounds describe resource usage conditional statement instance expression match true quicksort false derive bound quadratic length true constant false effects analysis handles references arrays ensuring resource cost asymptotically depend values stored mutable cells shown possible extend aara handle mutable state decided add feature current system focus presentation main contributions still lot possible interactions mutable state storing functions references setting 
stage describe formalize new resource analysis using core raml subset intermediate language use perform analysis expressions core raml form means syntactic forms allow variables instead arbitrary terms whenever possible without restricting expressivity automatically transform ocaml programs core raml without changing resource behavior analysis syntax purpose article syntax core raml expressions defined following grammar actual core expressions also contain constants operators primitive data types integer float boolean arrays operations arrays conditionals free versions syntactic forms free versions semantically identical standard versions contribute resource cost needed resource preserving translation code form ref fail tick match match share let let rec syntax contains forms variables function application data constructors lambda abstraction references tuples pattern matching recursive binding simplicity allow recursive definitions functions function application allow application several arguments useful statically determine cost closure creation also introduces ambiguity type system determine expression like parsed sharing expressions share standard used explicitly introduce multiple occurrences variable binds free variables expression fail used model exceptions expression tick contains towards automatic resource bound analysis ocaml floating point constant used tick metric specify constant cost negative floating point number means resources become available focus set language features since sufficient present main contributions work sometimes take liberty describe examples user level syntax use features data types described article operational cost semantics resource usage raml programs defined operational cost semantics semantics three interesting features first measures defines resource consumption evaluation raml expression using resource metric defines constant cost evaluation step cost negative resources returned second models terminating diverging executions inductively describing finite subtrees infinite execution trees third models ocaml mechanism function application avoids creation intermediate function closures semantics core raml formulated respect stack store arguments function application environment heap let loc infinite set locations modeling memory addresses heap finite partial mapping loc val maps locations values environment finite partial mapping var loc variable identifiers locations argument stack finite list locations assume every heap contains distinguished location null dom null null set raml values val given value val either location loc tuple locations function closure node data structure constructor location function closure environment expression variable since also consider resources like memory become available evaluation track watermark resource usage maximal number resource units simultaneously used evaluation derive watermark sequence evaluations watermarks sub evaluations one also take account number resource units available sub evaluation operational evaluation rules figure figure formulated respect resource metric define evaluation judgment form expresses following argument stack environment initial heap given expression evaluates location new heap evaluation needs resource units watermark evaluation resource units available actual resource consumption quantity negative resources become available execution two behaviors express semantics failure array access outside array bounds divergence end semantic judgement evaluates expressions values also 
error incomplete computations expressed judgement general form intuitively evaluation statement expresses watermark resource consumption number evaluation steps currently resource units left setting stage var var var var app bort app bind abs ind los let ppa let let rec rec cons ons match match figure rules operational semantics part towards automatic resource bound analysis ocaml tuple uple match matt share share ref ref null assign ssign hare dref fail fail tick null tick ndef ick figure rules operational semantics part resource metric defines resource consumption evaluation step semantics set constants write handy view pairs evaluation judgments elements monoid neutral element means resources neither needed evaluation returned evaluation operation defines account evaluation consisting evaluations whose resource consumptions defined respectively define resources never returned time elements form identify rational number element follows denotes denotes notation avoids case distinctions evaluation rules since constants appear rules negative semantic rules use notation indicate dom dom dom efficiency reasons inria ocaml compiler evaluates function applications right left starts evaluating way one avoid expensive creation intermediate function closures naive implementation would create function closures evaluating aforementioned expression one one application first argument etc starting last argument able put results evaluation argument stack access encounter function abstraction evaluation case create closure simply bind value stack name abstraction type system model treatment function application ocaml compiler use stack store locations function arguments rules push locations ppa pop locations stack modify leaf rules return function closure namely rules var variables lambda abstractions whenever would return function closure inspect argument stack contains location pop stack bind argument evaluate function body new environment defined rule ind indirectly rule var another rule modifies argument stack evaluate subexpression empty argument stack arguments stack evaluating let expressions consumed result evaluation argument stack accurately captures inria ocaml compiler behavior avoid creation intermediate function closures also extends naturally evaluation expressions form see section argument stack also necessary prove soundness multivariate resource bound analysis another important feature semantics model failing diverging evaluations allowing partial derivation judgments used derive resource usage steps technically realized rule bort applied point abort current evaluation without additional resource cost mechanism aborting evaluation visible rules evaluation let expression two possibilties first possibility evaluation subexpression aborted using bort point apply rule pass resource usage abort second possibility evaluates location apply bind variable evaluate expression example evaluation running example use running example defined figure illustrate operational cost semantics works end use metric steps assigns cost every evaluation step metric tick assigns cost every evaluation step let abs acons bcons bcons nil let expression arises concatenating appending expression asort abs code figure every exists tick steps moreover steps every let expression results appending btick abs code figure every exists tick tick every type system section introduce type system refinement ocaml type system type system mirror type system introduce particularities explain features types purpose article define 
simple types follows unit ref towards automatic resource bound analysis ocaml simple type unit type uninterpreted type variable type ref references type tuple type function type inductive data type two parts definition deserve explanation first bracket function types correspond standard function type meaning function applied first arguments time type indicates function applied first arguments one another two uses function result different resource behavior instance latter case create function closures also different costs account evaluation cost first argument present cost closure second argument present etc course possible function used different ways program account let polymorphism see following subsection also note still describes function describes function arguments second inductive types required particular form makes possible track costs depend size parameters values types course possible allow arbritary inductive types track cost extension straighforward present article assume constructor part one recursive type furthermore assume recursive type least one constructor inductive type sometimes write say node type branching number constructor maximal branching number max constructors branching number let polymorphism sharing following design type system type system affine means variable context used expression however enable multiple uses variable sharing expression share denotes used twice using different names input programs allow multiple uses variable expression raml introduce sharing constructs replace occurrences new names analysis interestingly mechanism closely related let polymorphism see relation first note type system polymorphic value used single type expression practice would mean instance define different map function every list type simple solution problem often applied practice let polymorphism principle let polymorphism replaces variables definitions type checking map function would mean type expression map map instead typing expression let map map principle would possible treat sharing variables similar way let polymorphism start expression let replace occurrences expression also change resource consumption evaluation evaluate multiple times interestingly problem coincides treatment let polymorphism expressions side effects called value restriction raml support let polymorphism function closures assume function definition let used twice usual approach enable analysis system would use sharing let share type system enable let polymorphism however define twice ensure pay creation closure let binding let let functions different types method cause exponential blow size expression nevertheless appealing enables treat resource polymorphism way let polymorphism type judgements type judgements form list types var type context maps variables types core expression simple type intuitive meaning formalized later section follows given evaluation environment matches type context argument stack matches type stack evaluates value type terminate interesting feature type judgements handling bracket function types even though function types multiple forms expression often unique type given type context type derived way function used instance two function types unique type expression unique type derivation produces type judgement empty type stack call canonical type derivation closed type judgement function type second type derivation call open type derivation derives open type judgement following lemma proved induction type derivations lemma open canonical type judgements 
interchangeable open type judgement appear derivation open root form subtree derivation whose root closed judgement form words open derivation expression function applied arguments time given type context fixed function type expression one open type derivation type rules figure presents selected type rules type system usual denotes union type contexts provided dom dom thus implicit side condition dom dom whenever occurs typing rule especially writing means variables pairwise distinct close correspondence evaluation rules type rules sense every evaluation rule corresponds exactly one type rule view two rules pattern match let binding one rule respectively type stack modified rules var ush ush ush every leaf rule return function type var ush add second rule derives equivalent open type reason becomes clear type system section rules directly control shape function types ush lambda abstraction rules deterministically syntax driven rules towards automatic resource bound analysis ocaml var var ush ush match ons match share let hare let rec ref ref ref ush fail fail uple ush ref eak ref unit tick unit ick figure rules affine type system ssign type system dom ref tvar null unit nit uple ons figure coinductively relating heap cells semantic values lambda abstraction introduce choice shapes functions types however often one possible choice depending abstracted function used mentioned type system affine every variable context used typed expression multiple uses introduced explicitly using rule hare exception rule allow use context body defined functions reason apparent resource aware version sharing function types always possible without restrictions environments simple type inductively define set values type goal advance state art denotational semantics rather capture tree structure data structures stored heap end distinguish mainly inductive types possible inner nodes trees types leaves formulation type soundness also require function closures simply interpret polymorphic data set locations loc loc set trees node labels inductively defined follows heap location type write mean defines semantic value pointers followed obvious way judgment formally defined figure heap may exist different semantic values simple types however fix simple type heap exists one value proposition let heap loc let simple type towards automatic resource bound analysis ocaml write indicate exists necessarily unique semantic value environment heap respect context holds every dom write similarly argument stack respect type stack heap written note rules figure interpreted coinductively reason rule location part closure environment closure created rule influence coinductive definition proofs minimal since proofs article induction type preservation theorem shows evaluation expression wellformed environment results environment theorem theorem proved induction evaluation judgement multivariate resource polynomials section define set resource polynomials search space automatic resource bound analysis resource polynomial maps semantic value simple type rational number analysis typical polynomial computations operating list shows often consist operations executed every simplest examples linear map operations perform operation every common examples sorting algorithms perform comparisons every pair worst case article generalize observation data structures lists different node types constructors linear computation instance often carried nodes general typical polynomial computation carried tuples list element constructor appears list 
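For plain lists, the general definitions above specialize to the familiar binomial basis of earlier multivariate AARA work; the following summary (ours) may help to fix intuitions. For a list of length $n$, the base polynomials of degree at most $k$ are the binomial coefficients

\[ p_j(n) = \binom{n}{j} \qquad (0 \le j \le k), \]

which count the ordered $j$-tuples of elements of the list. A resource polynomial is then a non-negative rational linear combination

\[ p(n) = \sum_{j=0}^{k} q_j \binom{n}{j}, \qquad q_j \in \mathbb{Q}_{\ge 0}, \]

and for two lists of lengths $n$ and $m$ the multivariate base polynomials are products such as $\binom{n}{i}\binom{m}{j}$, which count tuples drawn from both lists. For the A-B lists of the running example, the base polynomials additionally distinguish the constructors, counting, for instance, the pairs of A-nodes, or the pairs consisting of one A-node and one B-node.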
previous work considered binary trees essentially interpret treelike data structures lists different nodes flattening result resource polynomials depend number nodes certain kind tree structural measures like height tree include height resource polynomials general way would need way express maximum choice resource polynomials leave future research favor compositionality modularity compositionality useful potential data structure invariant changes structure tree base polynomials indices figure define simple type set functions map values type natural numbers resource polynomials type given rational linear combinations base polynomials let inductive type let list inductively define set follow multivariate resource polynomials figure defining set base polynomials type figure defining set indices type set nodes tree pre pre pre like lambda calculus use notation anonymous function maps argument natural number defined expression every set contains constant function case inductive data type constant function arises also one element sum empty product figure inductively define simple type set indices tuple types identify index index similarly identify index index inductive types let base type index define base polynomial follows examples illustrate definitions construct set base polynomials different data types first consider inductive type singleton one constructor without arguments singleton nil towards automatic resource bound analysis ocaml nil singleton see first examine set tuples nil different list constructors tree nil contain tuples size thus nil case empty sum remaining constructor lists nil always nil singleton sum furthermore nil nil nil nil nil unit let consider usual sum type sum left right left right define otherwise sum left right next example list type list cons nil nil cons nil write cons furthermore cons cons generally let cons cons cons cons nil lists length respectively hand nil since list list finally consider list type two different cons running example figure nil write similarly list let example multivariate resource polynomials denotes number list therefore consider set constructor lists contains exactly elements form elements form means products binomial coefficients sums base polynomials coinductive types like stream inhabited language since interpret inductively data structure type created since allow recursive definitions functions spurious indices previous examples illustrate inductive data structures different indices encode resource polynomial example type list nil lists additionally indices encode polynomial constantly zero type list example case nil call indices spurious practice beneficial spurious indices index sets since slow analysis without useful components bounds straightforward identify spurious indices data type definition index example spurious branching number resource polynomials resource polynomial simple type nonnegative linear combination base polynomials write set resource polynomials base type running example consider running example figure function abmap derived bound corresponds following resource polynomial acons bcons function asort derived bound corresponds resource polynomial acons acons acons selecting finite index set every resource polynomial defined finite number base polynomials implementation also fix finite set indices make possible effective analysis selection indices track customized inductive data type every program however currently allow user select maximal degree bounds track indices correspond polynomials smaller degree towards 
automatic resource bound analysis ocaml type system section describe type system essentially annotate simple type system section resource annotations type derivations correspond proofs resource bounds type annotations use indices base polynomials define type annotations resource polynomials type annotation simple type defined family write set type annotations type annotated type pair type annotation simple type defined follows unit ref define simple type obtained removing type annotations function types function type annotated set set contain multiple valid resource annotations arguments result function potential annotated types contexts let annotated type let heap let value type annotation defines potential finitely many usually define type annotations stating values coefficients type annotation also write use type system need extend definition resource polynomials type contexts stacks treat like tuple types let type context let list types index set defined type annotation family denote context let heap environment let furthermore argument stack potential respect particular sometimes also write type system folding potential annotations key notion type system folding potential annotations used assign potential typing contexts result pattern match unfolding application constructor inductive data type folding folding potential annotations conceptually similar folding unfolding inductive data types type theory let inductive data type let type stack context let context annotation ccb respect annotation ccb context defined concatenation lists lemma let inductive data type let annotated context ccb sharing let annotated context sharing operation defines annotation context form used potential split multiple occurrences variable lemma shows sharing linear operation lead loss potential lemma let data type natural numbers following holds every context every holds coefficients computed effectively however able derive closed formula coefficients proof similar previous work context define lemma type judgements type judgement form annotated context resource metric annotated type type annotation intended meaning judgment resource units available sufficient cover evaluation cost metric addition least resource units left evaluates value notations families describe type context annotations denoted upper case letters optional superscripts use convention elements families corresponding lower case letters corresponding superscripts annotations index set extend operations pointwise example write every write state let context let write index towards automatic resource bound analysis ocaml let annotation context define projection annotation way define annotations cost free types write refer type judgments metric constants use assign potential extended context let rule info available previous work subtyping usual subtyping defined inductively types structurally identical interesting rule one function types function type subtype another function type allows resource behaviors result types treated covariant arguments treated contravariant unsurprisingly type system principle types allow typing examples section principle type would assume weakest type arguments function types annotated empty sets type annotations would mean use functions arguments however possible derive principle type fixed argument types would derive possible annotations function annotation possible annotations appear function annotations result type take algorithmic view previous work express principle type function set constraints holes 
constraint sets higherorder arguments however unclear type means user prefer declarative view clearly separates type checking type inference open problem constraint based principle types polymorphism type rules figure figure contain type rules annotated types integrated new concepts rules look similar rules previous papers rule var applied type stack empty simply accounts cost var passes potential assigned variable type context result type type stack empty rule var ush applied case variable must function type look possible type annotation arguments result type context account cost variable var behave specified account cost function application cost handled rules ush rules ush correspond simple type rules ush app assume type stack empty account cost applying function arguments look valid potential annotations function body function annotation require potential specified available return potential specified rule ush account two applications first account function application rule assume return type function type apply arguments stored type stack rule var ush rules ush lambda abstraction correspond rules ush simple type system use pop type system var var var app app var ush bind ush abs ush otherwise mat match ccb cons ons share share hare let rec abs let rec figure type rules annotated types part ind towards automatic resource bound analysis ocaml tuple uple matt match fail fail tick unit ref ref ref tick fail dref ref assign ref unit eak otherwise ssign eak ush ick ref dref ubtype figure type rules annotated types part ubtype type system type stack rule create function annotation essentially deriving every however throw away potential depends context use potential assigned arguments annotation rule ons assigns potential new node inductive data structure additive shift ccb transforms annotation annotation context lemma shows potential neither gained lost operation potential context pay potential resulting list resource cost cons construction new node rule shows treat pattern matching initial potential defined annotation context sufficient pay costs evaluation depending whether matched succeeds potential defined annotation result type type expression basically use annotation paying constant match cost type expression rely additive shift ccb results annotation context loss potential see lemma equalities relate potential evaluation potential evaluation match operation incorporating respective resource cost matching rule hare uses sharing operation related potentials defined matching loss potential see lemma rule result evaluation expression bound variable problem arises resulting annotated context features potential functions whose domain consists data referenced well data referenced type context potential related data referenced free variables variables type context express relations mixed potentials evaluation introduce new auxiliary binding judgement rule ind intuitive meaning judgement following assume evaluated context dom evaluates value bound variable initial potential larger cost evaluating metric plus potential resulting context lemma formalizes intuition lemma let formally lemma consequence soundness type system theorem inductive proof theorem use weaker version lemma soundness type judgements lemma additional precondition rule similar rule standard type systems cost creation closures accounted abs difficult relate cost number captured variables closure refrain favor simplicity initial potential defined flows potential towards automatic resource bound analysis ocaml used pay cost 
evaluating expression potential annotation used typing recursive functions unconstrained bug uses fact used pay cost closure creation rule rule fail require constant potential fail available contrast rules relate initial potential resulting potential intuitively sound program aborted evaluating expression fail consequence rule fail type expression let fail constant initial potential fail regardless resource cost expression rule ick simply require tick constant potential available rule require availability constant potential ref discard remaining potential assigned since references carry potential system coefficient rule ssign simply pay cost operation assign discard potential assigned arguments since return value coefficient rule ref discard potential arguments also require coefficients annotation result zero references carry potential structural rules eak eak ubtype ubtype apply every expression implementation integrated syntax directed rules enable automatic type inference expected used exact places would use corresponding rules standard type system instance combining branches weakening subtyping match expressions constructing inductive data structures subtyping rule eak relies fact annotated type remains sound add potential context remove potential result similarly rule eak states add variables arbritary type context rules ubtype ubtype similar standard rules subtyping soundness goal prove following soundness statement type judgements intuitively says initial potential upper bound watermark resource usage matter long execute program prove statement induction need prove stronger statement takes account return value annotated type moreover previous statement true values respect types required therefore adapt definition environments annotated types simply replace rule figure following rule course refers newly defined judgment addition aforementioned soundness theorem states stronger property terminating evaluations expression evaluates value environment difference initial final potential upper bound resource usage evaluation theorem soundness let implementation bound inference input program parser type inference ocaml bytecode typed ocaml syntax tree raml compiler raml analyzer explicit let polymorphism multivariate aara inference type checking typed raml syntax tree solver frontend resource type interpretation normal form resource metrics ocaml bindings clp resource bounds figure implementation raml theorem proved nested induction derivation evaluation judgment type judgment inner induction type judgment needed structural rules one proof possible instantiations resource constants sole induction type judgement fails size type derivation increase case function application retrieve type derivation function body judgement defined updated rule structure proof matches structure previous soundness proofs type systems based aara induction case many rules similar induction cases corresponding rules multivariate aara programs linear aara programs one thing additional complexity introduced new resource polynomials data types designed system additional complexity dealt locally rules ons hare soundness rules follows directly application lemma lemma respectively previous work judgement captures type derivations enables treat function abstraction application similar fashion case coinductive definition judgement cause difficulties major novel aspect proof typed argument stack also carries potential surprisingly typed stack simply treated like typed environment proof already incorporated shift share 
operations lemma lemma deal mutable heap requiring array elements influence potential array result prove following lemma used proof theorem lemma implementation bound inference figure shows overview implementation raml consists lines ocaml code excluding parts reused inria ocaml implementation development took around person months found helpful develop implementation theory parallel many theoretical ideas inspired implementation challenges towards automatic resource bound analysis ocaml reuse parser type inference algorithm ocaml derive typed ocaml syntax tree source program analyze function applications introduce bracket function types end copy lambda abstraction every call site still implement unification algorithm since functions let defined partial application may used different call sites moreover deal functions stored references next step convert typed ocaml syntax tree typed raml syntax tree furthermore transform program form without changing resource behavior purpose syntactic form free flag specifies whether contributes cost original program example share forms introduced free also insert eta expansions whenever influence resource usage compilation phase perform actual multivariate aara program form resource metrics easily specified user include metric heap cells evaluation steps ticks ticks allows user flexibly specify resource cost programs inserting tick commands possibly negative number principle actual bound inference works similarly previous aara systems first fix maximal degree bounds annotate types derivation simple types variables correspond type annotations resource polynomials degree second generate set linear inequalities express relationships added annotation variables specified type rules third solve inequalities fantastic solver clp solution linear program corresponds type derivation variables type annotations instantiated according solution objective function contains coefficients resource annotation program inputs minimize initial potential modern solvers provide support iterative solving allows express minimization annotations take priority type system use implementation significantly differs declarative version describe article one thing use algorithmic versions type rules inference rules integrated syntaxdirected ones another thing annotate function types set type annotations function returns annotation result type presented annotation return type annotations symbolic actual numbers yet determined solver function annotations side effect sending constraints solver would possible keep constraint set respective function memory send copy fresh variables solver every call however efficient lazily trigger constraint generation function body every call site function provided return annotation make resource analysis expressive also allow recursion means need type annotation recursive call differs annotations argument result types function infer types successively infer type annotations higher degree details found previous work part constraints form network problem solvers handle network problems efficiently practice clp solves constraints raml generates linear time problem sizes large save memory time reducing number constraints generated typing representative example optimization try reuse constraint names instead producing constraints like experimental evaluation raml provides two ways analyzing program main mode raml derives bound evaluation cost main expression program last expression list let bindings module mode raml derives bound every let binding function type 
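As a small illustration of a user-defined metric (our sketch, assuming the tick construct is exposed as Raml.tick, as in the RAML distribution), the following function charges one resource unit per element of a list. Under the tick metric, the analysis would report a bound that is linear in the length of qs, namely the total number of ticks executed:

  let rec send_queries qs =
    match qs with
    | [] -> ()
    | _ :: qs' ->
      Raml.tick 1.0;   (* one unit of user-defined cost, e.g. one query *)
      send_queries qs'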
apart analysis also implemented conversion derived resource polynomials polynomial bounds pretty printer raml types expressions additionally implemented efficient raml interpreter use debugging determine quality bounds experimental evaluation development raml driven ongoing experimental evaluation ocaml code goal ensure derived bounds precise different programming styles supported analysis efficient existing code analyzed experimental evaluation applied automatic resource bound analysis functions lines code source code raml well ocaml files used experiments available online website also provides interactive web interface used experiment raml analyzed code limitations experiments performed four sets source code extracted ocaml code coq specifications compcert ocaml tutorial code ocaml standard library handwritten code handwritten part mostly implemented classical textbook algorithms use cases inspired applications textbook algorithms include algorithms matrices graph algorithms search algorithms classic examples amortized analysis functional queues binary counters use cases include energy management autonomous mobile device calling amazon dynamo ocaml see section ocaml complex programming language raml yet support language features ocaml includes modules features record types equality strings nested patterns calls native functions therefore currently hard apply raml directly existing ocaml code however support many additional features theoretical limitation analysis rather caused lack resources implementation side raml applied existing code results satisfactory instance applied raml ocaml standard list library raml automatically derives bounds functions derived bounds asymptotically tight functions bounded raml use variation merge sort whose termination thus resource usage depends arithmetic shift currently unsupported file consists lines code analysis takes macbook pro also note technique extends previous works aara strict sequential evaluation thus handle examples previously evaluated quality derived bounds identical previous ones efficiency analysis improved raml fails resource usage bounded measure depends semantic property program measure depends difference sizes two data structures loose bounds often result dependencies instance behaviors two functions might triggered different towards automatic resource bound analysis ocaml let comp fun let rec walk match fun match left fun comp walk fun right let right quicksort fun comp walk fun let walk raml output run time num comp argument fraction component arg maximal number component argument fraction component arg figure modified challenge example avanzini shortened output automatic bound analysis performed raml function derived bound tight bound number evaluation steps semantics take account cost argument inputs however analysis would add behaviors functions program another reason loose bounds tight bound represented multivariate resource polynomial example experiment give impression experiments performed figure contains output analysis challenging function raml code adoption example recently presented avanzini function handled existing tools illustrate challenges resource analysis programs avanzini implemented somewhat contrived reverse function rev lists using functions raml automatically derives tight linear bound number evaluation steps used rev show features analysis modified avanzini rev figure adding additional argument pattern match definition function walk resulting type walk bool list either list list either list list either list 
like modification walk essentially function lists however assume input lists contain nodes form left right list reverse process first list argument sort list contained using standard implementation quick sort given raml derives tight bound shown figure since comparison function experimental evaluation compcert metric funs loc time const lin quad cubic poly fail steps heap tick steps asym tight table overview experimental results quicksort argument available raml assumes consume resources analysis applied concrete argument analysis repeated derive bound instance compcert evaluation also performed evaluation ocaml code created coq code extraction mechanism compilation verified compcert compilers sorted files topologically dependency requirements analyzed files top could process files dependency graph heavily relied modules currently support using metric analyzed functions loc seconds figure shows example functions compcert code base artifact coq code extraction compcert uses two implementations reverse function lists function rev naive quadratic implementation uses append function rev efficient linear implementation raml automatically derives precise evaluation step bounds functions result coq user inspecting derived bounds extracted ocaml code likely spot performance problems resulting use rev summary results table contains compilation experimental results first rows show results ocaml libraries handwritten code ocaml tutorial last column shows results compcert column loc contains total number lines ocaml code analyzed respective metric similarly column time contains total time analyses metric column poly contains number functions raml automatically derived bound columns const lin quad cubic show number derived bounds constant linear quadratic cubic finally columns fail contain number examples raml unable derive bound number bounds asymptotically tight respectively also experimented example inputs determine precision constant factors bounds general bounds precise often match actual behavior however yet perform systematic evaluation automatically generated example inputs reported numbers result analysis functions exceptions recursive higher order appendix contains short description every function part evaluation along type run time analysis derived bounds functions automatically analyzed using steps metric counts number evaluation steps heap metric counts number allocated heap cells moreover used tick metric add custom cost measures functions measures vary program program include number function calls list analyzed files functions included towards automatic resource bound analysis ocaml let rec app match app let rec rev function app rev let rec match let rev raml output rev run time steps metric raml output rev run time steps metric figure two implementations rev compcert derived bounds tight one linear quadratic energy consumption amount data sent cloud details found source code two main reasons difference runtime analysis per function compcert code evaluated code first compcert code contains complex data structures thus track coefficients second larger percentage functions derive bound result raml looks bounds higher degree giving leads larger number constraints solve solver finally outlier functions cause unusually long analysis time possibly due performance bug general analysis efficient raml slowing analyzed program contains many variables functions many arguments another source complexity maximal degree polynomials search space depending complexity program analysis becomes unusable 
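The two list-reversal implementations from the extracted CompCert code that are compared in the figure below are, in essence, the classic pair of functions from Coq's list library, reconstructed here as a sketch:

  let rec app l1 l2 =
    match l1 with
    | [] -> l2
    | x :: xs -> x :: app xs l2

  (* Naive reversal: quadratic, because app re-traverses the growing result. *)
  let rec rev = function
    | [] -> []
    | x :: xs -> app (rev xs) [x]

  (* Accumulator-based reversal: linear. *)
  let rec rev_append l acc =
    match l with
    | [] -> acc
    | x :: xs -> rev_append xs (x :: acc)

  let rev' l = rev_append l []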
searching bounds maximal degree efficiency could improved combining amortized resource analysis analyses heuristics predict parts input cause resource usage case study bounds dynamodb queries integrated analysis inria ocaml compiler enables analyze compile real programs interesting use case resource bound analysis infer bounds dynamodb queries dynamodb commercial nosql cloud database service part amazon web services aws amazon charges dynamodb users combination number queries transmitted fields throughput since dynamodb nosql case study bounds dynamodb queries service often possible retrieve whole expensive large data single entries identified key value dynamodb api available opam package aws make api available analysis using tick functions specify resource usage since query cost different tables different provide one function per action table let following describe analysis specific ocaml application uses database contains large table stores grades students different courses first function computes average grade student given list courses let let acc cid let length sum acc let grade match cid none raise cid length sum grade let length sum foldl sum length raml computes tight bound length argument omit standard definitions functions like foldl map however systems bounds derived form first principles next sort given list students based average grades given list classes using quick sort first approximation use comparison function based let geq results database queries number students number courses reason comparisons run quick sort since resource usage quick sort depends number courses make list courses explicit argument store closure comparison function let rec partition acc match let acc let aux acc let acc aux aux else aux partition acc let rec qsort aux match let partition aux append qsort aux qsort aux let qsort geq towards automatic resource bound analysis ocaml raml computes tight bound length argument length argument negative factor arises translation resource polynomials standard basis given alarming cubic bound reimplement sorting function using memoization end create table looks stores student course grade dynamodb replace function function lookup let lookup sid cid table let find fun sid table find fun cid resulting sorting function raml computes tight bound related work work builds past research automatic amortized resource analysis aara aara introduced hofmann jost strict functional language data types technique applied functional programs user defined types derive bounds programs lazy evaluation programs code integrating separation logic aforementioned systems limited linear bounds hoffmann presented multivariate aara language lists binary trees hofmann moser proposed generalization system context term rewrite systems however unclear automate system article introduce first aara able automatically derive multivariate polynomial bounds depend inductive data structures system one derive polynomial bounds functions even linear bounds analysis expressive existing systems strict languages instance first time derive bound curried append function lists moreover integrated aara first time existing industrialstrength compiler type systems inferring verifying resource bounds extensively studied vasconcelos described automatic analysis system based derives linear bounds functional programs derive polynomial bounds dal lago introduced linear dependent types obtain complete analysis system time complexity lambda calculus crary weirich presented type system specifying certifying resource 
consumption danielsson developed library based dependent types manual cost annotations used complexity analyses functional programs advantage technique fully automatic classically cost analyses often based deriving solving recurrence relations approach pioneered wegbreit actively studied imperative languages works concerned functions bounds depend data structures benzinger applied wegbreit method automatic complexity analysis nuprl terms however complexity information functions provided explicitly grobauer reported mechanism automatically derive cost recurrences dml programs using dependent types danner propose interesting technique references derive recurrence relations functional programs solving recurrences discussed works contrast work able automatically infer bounds abstract interpretation based approaches resource analysis focus integer programs loops cicek study type system incremental complexity active area research techniques term rewriting applied complexity analysis sometimes combination amortized analysis techniques usually restricted programs time complexity recently avanzini proposed complexity preserving defunctionaliztion deal programs transformation asymptotically complexity preserving unclear whether technique derive bounds precise constant factors finally exists research studies cost models formally analyze parallel programs blelloch greiner pioneered cost measures work depth advanced cost models take account caches see blelloch harper however works provide machine support deriving static cost bounds conclusion presented important first steps towards practical automatic resource bound analysis system ocaml three main contributions novel automatic resource analysis system infers multivariate polynomial bounds depend size parameters userdefined data structures first aara infers polynomial bounds functions integration automatic amortized resource analysis ocaml compiler title article indicates many open problems left way resource analysis system ocaml used development future plan improve bound analysis programs exceptions also work mechanisms allow user interaction manually deriving bounds automation fails furthermore work taking account garbage collection runtime system deriving time space bounds finally investigate techniques link bounds hardware code produced compiler open questions certainly challenging tools push boundaries practical quantitative software verification references albert arenas genaim puebla automatic inference resource consumption bounds logic programming artificial intelligence reasoning conference lpar pages albert arenas genaim puebla upper bounds static cost analysis journal automated reasoning pages albert arenas genaim puebla zanardini cost analysis java bytecode euro symp prog esop pages towards automatic resource bound analysis ocaml albert arenas genaim puebla zanardini cost analysis objectoriented bytecode programs theor comput albert resource analysis tools algorithms construction analysis systems international conference tacas pages alias darte feautrier gonnord rankings program termination complexity bounds flowchart programs int static analysis symposium sas pages genaim limits classical approach cost analysis int static analysis symp sas pages atkey amortised resource analysis separation logic euro symp prog esop pages avanzini lago moser analysing complexity functional programs meets int conf functional programming icfp avanzini moser combination framework complexity international conference rewriting techniques applications rta pages 
benzinger automated complexity analysis theor comput blanc henzinger hottelier abc algebraic bound computation loops logic reasoning int conf lpar pages blelloch greiner provable time space efficient implementation nesl int conf funct prog icfp pages blelloch harper cache efficent functional algorithms acm symp principles prog langs popl pages brockschmidt emmes falke fuhs giesl alternating runtime size complexity analysis integer programs tools alg constr anal systems int conf tacas pages campbell amortised memory analysis using depth data structures euro symp prog esop pages carbonneaux hoffmann shao compositional certified resource bounds conf prog lang design impl pldi henzinger radhakrishna zwirchmayr segment abstraction execution time analysis european symposium programming esop pages garg acar refinement types incremental computational complexity european symposium programming esop pages references crary weirich resource bound certification acm symp principles prog langs popl pages crosby wallach denial service via algorithmic complexity attacks usenix security symposium usenix danielsson lightweight semiformal time complexity analysis purely functional data structures acm symp principles prog langs popl pages danner licata ramyaa denotational cost semantics functional languages inductive types int conf functional programming icfp danner paykin royer static cost analysis language workshop prog languages meets prog verification plpv pages resource analysis complex programs cost equations programming languages systems asian symposiu aplas pages grobauer cost recurrences dml programs int conf funct prog icfp pages gulwani mehra chilimbi speed precise efficient static estimation program computational complexity acm symp principles prog langs popl pages hoffmann types potential polynomial resource bounds via automatic amortized analysis phd thesis hoffmann raml web site http hoffmann aehlig hofmann multivariate amortized resource analysis acm symp principles prog langs popl hoffmann aehlig hofmann multivariate amortized resource analysis acm trans program lang hoffmann hofmann amortized resource analysis polymorphic recursion partial operational semantics prog langs systems asian symposium aplas hoffmann hofmann amortized resource analysis polynomial potential euro symp prog esop hoffmann shao amortized resource analysis integers arrays international symposium functional logic programming flops hofmann jost static prediction heap space usage functional programs acm symp principles prog langs popl pages towards automatic resource bound analysis ocaml hofmann jost amortised analysis euro symp prog esop pages hofmann moser amortised resource analysis typed polynomial interpretations rewriting typed lambda calculi pages hofmann moser multivariate amortised resource analysis term rewrite systems international conference typed lambda calculi applications tlca pages hofmann rodriguez automatic type inference amortised analysis euro symp prog esop pages hughes pareto sabry proving correctness reactive systems using sized types acm symp principles prog langs popl pages jin song shi scherpelz understanding detecting performance bugs conference programming language design implementation pldi pages jost hammond loidl hofmann static determination quantitative resource usage programs acm symp principles prog langs popl pages kocher timing attacks implementations rsa dss systems advances cryptology annual international cryptology conference crypto pages lago gaboardi linear dependent types relative 
completeness ieee symp logic computer science lics pages lago petit geometry types acm symp principles prog langs popl pages leroy formal verification realistic compiler communications acm leroy doligez frisch garrigue vouillon ocaml system release technical report institut national recherche informatique automatique http nicollet problems solved ocaml https noschinski emmes giesl analyzing innermost runtime complexity term rewriting dependency pairs autom reasoning olivo dillig lin static detection asymptotic performance bugs collection traversals conference programming language design implementation pldi pages vasconcelos florido jost hammond automatic amortised analysis dynamic memory allocation lazy functional programs int conf funct prog icfp pages references sinn zuleger veith simple scalable approach bound analysis amortized complexity analysis computer aided verification int conf cav page vanderbei linear programming foundations extensions springer vasconcelos space cost analysis using sized types phd thesis school computer science university andrews vasconcelos hammond inferring costs recursive polymorphic higherorder functional programs int workshop impl funct langs ifl pages lncs vasconcelos jost florido hammond allocation analysis lazy functional languages european symposium programming esop pages wegbreit mechanical program analysis commun acm zuleger sinn gulwani veith bound analysis imperative programs abstraction int static analysis symp sas pages towards automatic resource bound analysis ocaml experimental results analyzed functions name type file problems ocaml last list option lasttwo list option int list option natat nat list option loc description length rev eqlist ispalindrome flatten compress pack list int list list list list bool list bool node list list list list list list list encode decode duplicate replicate drop split slice concat rotate removeat insertat constructlist random min randselect lottoselect list int list int list list list list list int list list int list list int list list list int int list list list list list int list list int list list int list int int list int int int int int list int list int int int list snd fst map insert sort compare lengthsort file list list int list list int list list int int int list list list list returns last element list returns last two elements list outputs element particular location implemented using natural numbers defined inductively returns length list reverses list checks equality two lists checks list palindrome flattens tree list deletes successive duplicates packs successive duplicates inner list encoding list decodes encoding list duplicates element list replicates element list times drops every element splits list two lists extracts slice list concatenates two lists rotates list positions removes list position inserts element position constructs list element generates random number returns min two integers generates random permutation list composes randselect constructlist returns second element product returns first element product applies every element list sort helper sorts list according compare function compares two integers sorts list lists according size list int bool int bool boolexpr int int boolexpr bool bool bool list int int list helper constructs truth table expression int bool list boolexpr bool int bool list int list boolexpr int bool list bool list returns element list corresponding key evaluates boolean expression list int list int list list int float list float list float float list float 
list float list int float list assoc eval tablemake file size getelem returns size list returns element list returns element lists echelon helper echelon helper experimental results echelon helper concat tail float list list float list int float list list list list list list int list reverse head split list int list list list list list list list list int list list int int list list list int list list subtract float list list int float list list float list list int list float list list float list list float list list concatenates two lists returns list excluding first elements echelon helper echelon helper reverses list returns first elements list echelon helper splits list position returns two lists subtract row echelon helper takes matrix rows columns reduces upper triangular matrix list list int bool int int list list bool int int list list bool int int list list int int list list list list int int int int int int int list int list int int list int list list int list list int int list list int int list list int int list list bool int int int list list int int int list list int int int list list int int int list list int int int list list int int int list list list list list list list list list list list list list list list list list list list int list int list int int list int list list int list list int list list int list list int list list matrix helper matrix helper matrix helper matrix helper returns element matrix mat matrix helper matrix helper matrix helper matrix helper adds two matrices subtracts two matrices matrix helper matrix helper delete submat int int list int list int list int list int list list int list int list list int list list int list list int list list int int list list int int list list bool int int int list list int int int list list int int int list list list int list list list int int list list appends end list appends column col matrix matrix helper takes transpose matrix matrix helper matrix helper multiplies matrices naive implementation matrix helper matrix helper multiplies matrices performing dimensional sanity checks deletes element list deletes row column matrix file sendmsg msg int list unit event list unit event list unit file getelemmatrix plus minus append transpose prod linemult computeline mult sends list integers sends sensor data soon produced stores sensor data buffer sends specific events towards automatic resource bound analysis ocaml event list unit event list unit event list unit file partition bool list list list quicksort bool list list bool list either list list either list list list list list list list list list list list list list concatenates inner lists concatenates innermost lists concatenates innermost lists btree int btree option btree int btree option search binary trees search binary trees list list generate ordered pairs list bin list bin list bin list bin list bin list list bin list increment binary counter increments binary counter increments binary counter unit unit array calls function fold natural numbers successively apply functions stored array file add sub mult nat nat nat nat nat nat nat nat nat expr nat eval expr nat recursively add two natural numbers recursively subtract two nats recursively multiply two nats simple evaluater arithmetic expressions evaluater arithmetic expressions without aux funs list list list bool list list list bool list list int list list int list list splits list middle merges two sorted lists merge sort merge sort lists lists bool list list list bool list list int int list int int list int 
list list int list list partition quick sort quick sort quick sort integer pairs quick sort lists lists int binary list int exponentiation via squaring int list int list int longest common subsequence dynamic programming ablist ablist bool list ablist list ablist map sort inner lists file file dfs bfs file pairs file file file split merge mergesort file partition quicksort file file lcs file abmap asort debugging mode function application data sent specific time intervals partitions list two depending whether elements satisfy function performs quick sort list using comp comparator experimental results asort btick abfoldr file merge list file length cons nth append rev flatten concat map mapi iter iteri exists mem memq assoc assq find filter partition split combine merge chop sort bool list ablist list ablist int list int ablist int list int ablist ablist int int ablist int ablist sort inner lists raise exception use map tick every fold two nested folds bool list list list bool list list merge two lists interesting variant merge sort list int list list list list list list int list list list list list list list list list list list list list list list list int list list list list list unit int list unit list list list list list list list list list list unit list list list list bool list bool bool list bool bool list list bool bool list list bool list bool list bool list list list bool list bool list list list list bool list bool list list bool list list bool list list list list list list list list list int list list list int list list int list list int list list int list list length list list cons head list tail list nth element list list append right fold two lists check condition list elems check condition one list elem two lists exists two lists checks elem list mem uses equality lookup pairs assoc uses equality like mem pairs like memq pairs filter varient using compare filter varient using equality list find list find returns matches list filter list partition split list pairs zip two lists merge merge sort take first elements merge sort merge sort merge sort reverses list flattens list flattens list list map list map index reverse map iterate list iterate index list fold list fold list map two lists reverse map two lists iterate two lists left fold two lists towards automatic resource bound analysis ocaml file int int list float int int int list bool int list int list int list int list int list int list int int float list list bool list int int int int list list int int list int int float list list float int int float list list int list int int int int float list list bool int int float list list int list int list int list avarage looking grades dynanmodb compare students looking grades dynamodb sort students based avarage grade using look grades dynamodb memoize grades tables find value looking key look grade table avarage grade using table using table sorting using table list list list list list list list list list list list list collapses elements matrix list collapses elements matrix list collapses elements matrix list file remove nub int list int list bool int list int list list int list list int list list int list list checks two lists equal duplicates helper removes duplicate lists list lists file multlist int int list int list dyade int list int list int list list multiplies elements list constant multiplies elements two lists form matrix file filter int int list int list eratos int list int list int list int int int list int int int int int int int list int list int int list 
int list int list int list int int int int int int list int list int int list int int list int list int list int list int list int list int list int list int int list int list bool bitvector helper converts bit vector integer bitvector helper bitvector helper adds two bitvectors bitvector helper tree int list int int list int list int list int list tree int list collapses tree list inserts element sorted list performs insertion sort list performs insertion sort flattening tree find lookup file prototype file appendall file bittoint bittoint sum add add diff sub sub mult compare leq file flatten insert insertionsort flattensort file deletes elements list divisible first argument runs sieve eratosthenes algorithm list subtracts two bitvectors multiplies two bitvectors bitvector helper compares two bitvectors experimental results isortlist int list list int list list performs insertion sort list lists lists compared lexicographically returns first line zeros lcs helper computes max two integers computes new line recursively computes length table computes longest common subsequence two lists list list list int list int list int list int list int list int list int list splits list two merges two sorted lists buggy version mergesort correct version mergesort int list int list int list int list helper selection sort performs selection sort list list list list list list list list returns empty list enqueues element list enqueues list trees queue trees dequeue helper dequeues element queue constructs node tree bfs helper performs breadth first search tree dfs helper performs depth first search tree partitions list performing quick sort performs mutually recursive implementation quicksort presented hongwei zips lists together groups list list triples reverse helper reverses list mutual recursion mutual recursion function size change paper reimplemented helper late starting descending parameters split helper splits values according keys quicksort helper performs quicksort list sorts value lists minsort file firstline list int list right int list int max int int int newline int int list int list lcstable int list int list int list list lcs int list int list int file msplit merge mergesortbuggy mergesort file findmin minsort file empty enqueue enqueues copyover dequeue children list list list list list list list list list list list list list list breadth list list list list list list list startbreadth list list depth list list list startdepth list list file part int int list int list int list int list quicksortmutual int list int list file list list list list list list file list list list rev list list list list list list list list list list list list last list list list list list list list list list list list file insert int list int list list int list split int list list int list splitqs int int list int list int list quicksort int list int list sortall int list list int list list splitandsort file subtrees file attach pairs pairsaux pairsslow triples quadruples file makegraph dijkstra file compcert file prefix file uncurry file value file file implb xorb negb fst snd length app file eqb file file plus max min file remove rev rev towards automatic resource bound analysis ocaml int int list int list int list splits list according keys sorts inner lists tree tree list generates list subtrees tree list list list list list list list list list list list list list attaches first argument every element list generates distinct pairs list helper pairsslow slow implementation pairs generates distinct triples 
list generates distinct quadruples list int list int array array int array array int array int list int list bool int list int list bool list sigt sigt option bool bool bool bool bool bool bool bool list nat list list list comparison comparison comparison comparison bool bool bool bool bool bool bool reflect bool bool bool nat nat nat nat nat nat nat nat nat nat list list bool list bool list nat option bool list list list list list list list list list creates array based weighted graph list dijkstra algorithm experimental results map existsb forallb filter file file file succ add pred sub mul iter pow square size compare min max eqb leb ltb sqrtrem sqrt gcdn gcd ggcdn ggcd bool list list bool list list list list bool list bool bool list bool list bool list list nat nat bool nat nat bool nat nat bool nat nat comparison positive positive positive positive positive positive positive positive positive positive positive positive positive positive mask positive mask mask mask mask mask positive mask mask mask positive positive mask positive positive mask positive positive positive positive positive positive positive positive positive positive positive positive positive positive positive positive positive nat positive positive positive positive comparison comparison positive positive comparison positive positive positive positive positive positive positive positive bool positive positive bool positive positive bool positive positive positive mask positive mask positive positive mask positive positive nat positive positive positive positive positive positive nat positive positive positive positive positive positive positive positive positive positive positive positive positive positive positive ldiff shiftl shiftr testbit file div modulo discr towards automatic resource bound analysis ocaml positive positive positive positive positive nat positive positive nat positive positive positive positive positive positive nat bool positive bool positive positive nat nat positive nat positive positive positive positive comparison bool bool bool bool bool nat positive nat nat nat bool bool nat nat positive option experimental results lcm setbit clearbit ones lnot reflect reflect reflect bool unit unit unit unit bool unit unit unit unit bool unit unit unit bool unit unit unit bool towards automatic resource bound analysis ocaml bounds name step bound file problems ocaml last lasttwo natat length rev eqlist ispalindrome flatten compress pack encode decode duplicate replicate drop split slice concat rotate removeat insertat constructlist random min randselect lottoselect snd fst map insert sort compare lengthsort file assoc eval tablemake file size getelem concat tail reverse head split subtract analysis time constraints experimental results file getelemmatrix plus minus append transpose prod linemult computeline mult delete submat file sendmsg msg file partition quicksort file file dfs bfs file pairs file file file add sub mult eval file split merge mergesort file partition quicksort file file lcs file abmap asort asort btick abfoldr file merge list file length cons nth append rev flatten concat map mapi iter iteri exists mem memq assoc assq towards automatic resource bound analysis ocaml experimental results find filter partition split combine merge chop sort file find lookup file prototype file appendall file remove nub file multlist dyade file filter eratos file bittoint bittoint sum add add diff sub sub mult compare leq file flatten insert insertionsort flattensort file isortlist file firstline right max 
newline lcstable lcs file msplit merge mergesortbuggy mergesort file findmin minsort file empty enqueue enqueues copyover dequeue children breadth towards automatic resource bound analysis ocaml startbreadth depth startdepth file part quicksortmutual file file rev last file insert split splitqs quicksort sortall splitandsort file subtrees file attach pairs pairsaux pairsslow triples quadruples file experimental results makegraph dijkstra file compcert file prefix file uncurry file value file file implb xorb negb fst snd length app file eqb file file plus max min file remove rev rev map existsb forallb filter file file file succ add pred sub mul iter pow square size compare min max eqb leb ltb sqrtrem sqrt gcdn gcd ggcdn ggcd ldiff shiftl shiftr testbit file towards automatic resource bound analysis ocaml experimental results div modulo discr lcm setbit clearbit ones lnot towards automatic resource bound analysis ocaml experimental results bounds name heap bound file problems ocaml last lasttwo natat length rev eqlist ispalindrome flatten compress pack encode decode duplicate replicate drop split slice concat rotate removeat insertat constructlist random min randselect lottoselect snd fst map insert sort compare lengthsort file assoc eval tablemake file size getelem concat tail reverse head split subtract analysis time constraints file getelemmatrix plus minus append transpose prod linemult computeline mult delete submat file sendmsg msg file partition quicksort file file dfs bfs file pairs file file file add sub mult towards automatic resource bound analysis ocaml experimental results eval file split merge mergesort file partition quicksort file file lcs file abmap asort asort btick abfoldr file merge list file length cons nth append rev flatten concat map mapi iter iteri exists mem memq assoc assq towards automatic resource bound analysis ocaml find filter partition split combine merge chop sort file find lookup file prototype file appendall file remove nub file multlist dyade file filter eratos file bittoint bittoint sum add add diff sub sub mult compare leq file flatten insert insertionsort flattensort file isortlist file firstline right max experimental results newline lcstable lcs file msplit merge mergesortbuggy mergesort file findmin minsort file empty enqueue enqueues copyover dequeue children breadth startbreadth depth startdepth file part quicksortmutual file file rev last file insert split splitqs quicksort sortall splitandsort file subtrees file attach pairs pairsaux pairsslow triples quadruples file makegraph dijkstra file compcert file prefix file uncurry file value file file implb xorb negb fst snd length app file eqb file file plus max min file remove rev rev map existsb forallb filter file file file succ add pred towards automatic resource bound analysis ocaml experimental results sub mul iter pow square size compare min max eqb leb ltb sqrtrem sqrt gcdn gcd ggcdn ggcd ldiff shiftl shiftr testbit file div modulo discr lcm setbit clearbit ones lnot towards automatic resource bound analysis ocaml experimental results tick bounds name tick bound file problems ocaml last lasttwo natat length rev eqlist ispalindrome flatten compress pack encode decode duplicate replicate drop split slice concat rotate removeat insertat constructlist random min randselect lottoselect snd fst map insert sort compare lengthsort file assoc eval tablemake file size getelem concat tail reverse head split subtract analysis time constraints file getelemmatrix plus minus append transpose prod linemult 
computeline mult delete submat file sendmsg msg file partition quicksort file file dfs bfs file pairs file file file add sub towards automatic resource bound analysis ocaml experimental results mult eval file split merge mergesort file partition quicksort file file lcs file abmap asort asort btick abfoldr file merge list file length cons nth append rev flatten concat map mapi iter iteri exists mem memq assoc assq towards automatic resource bound analysis ocaml find filter partition split combine merge chop sort file find lookup file prototype file appendall file remove nub file multlist dyade file filter eratos file bittoint bittoint sum add add diff sub sub mult compare leq file flatten insert insertionsort flattensort file isortlist file firstline right experimental results max newline lcstable lcs file msplit merge mergesortbuggy mergesort file findmin minsort file empty enqueue enqueues copyover dequeue children breadth startbreadth depth startdepth file part quicksortmutual file file rev last file insert split splitqs quicksort sortall splitandsort file subtrees file attach pairs pairsaux pairsslow triples quadruples file makegraph dijkstra file compcert file prefix file uncurry file value file file implb xorb negb fst snd length app file eqb file file plus max min file remove rev rev map existsb forallb filter file file file succ add towards automatic resource bound analysis ocaml experimental results pred sub mul iter pow square size compare min max eqb leb ltb sqrtrem sqrt gcdn gcd ggcdn ggcd ldiff shiftl shiftr testbit file div modulo discr lcm setbit clearbit ones lnot towards automatic resource bound analysis ocaml
6
fast exact bregman divergence clustering jun allan kasper green jesper sindahl nielsen stefan schneider alexander mingzhou song abstract clustering problem points dimension however case exist exact polynomial time algorithms previous literature reported time dynamic programming algorithm uses space present new algorithm computing time using linear space improve even time generalize new algorithm work absolute distance instead squared distance work bregman divergence well aarhus university email jallan supported madalgo center massive data algorithmics center danish national research foundation aarhus university email larsen supported madalgo villum young investigator grant auff starting grant aarhus university email supported madalgo auff starting grant aarhus university email jasn supported madalgo university california san diego email stschnei supported nsf grant division computing communication foundations opinions findings conclusions recommendations expressed material authors necessarily reflect views national science foundation new mexico state university email joemsong introduction clustering problem grouping elements clusters element similar elements cluster assigned similar elements cluster one primary problem area machine learning known unsupervised learning clustering problem famous widely considered given multiset find centroids minimizing several results exist finding optimal clustering general forcing one turn towards heuristics even general dimension also general even hardness approximation results exist authors show exists approximate within factor optimal proved upper bound side best known polynomial time approximation algorithm approximation factor practice lloyd algorithm popular iterative local search heuristic starts random arbitrary clustering running time lloyd algorithm tknd number rounds local search procedure theory lloyd algorithm run convergence local minimum could exponential guarantee well solution found approximates optimal solution lloyd algorithm often combined effective seeding technique selecting initial centroids due gives expected approximation ratio initial clustering improved lloyd algorithm case problem particular time space dynamic programming solution case due work kmeans problem encountered surprisingly often practice examples data analysis social networks bioinformatics retail market natural try reasonable distance measures data considered define different clustering problems many choices sum squares euclidian distances define instance one could use norm instead special case known clustering also received considerable attention problems also best polynomial time approximation algorithms approximation factor authors consider define clustering bregman divergences bregman divergence generalize squared euclidian distance thus bregman clusterings include problem well wide range clustering problems defined bregman divergences like clustering kullbackleibler divergence cost interestingly heuristic local search algorithm bregman clustering basically approach lloyd algorithm clustering bregman divergences clearly well since includes clustering refer reader general problem version problem generalized algorithm problems bregman divergences achieving time space bounds results paper give theoretically practically efficient algorithm clustering problems particular clustering problem defined follows given find centroids minimizing cost min main results paper new fast algorithms first give algorithm computes optimal clustering runs time using optimal space time 
input already sorted algorithm also computes cost optimal clustering using clusters relevant instance model selection right improvement factor time space compared existing solution also supports computing cost constant factors hidden small expect algorithm efficient practice second show compute optimal clustering time using space algorithm mainly theoretical interest expect constants rather large opposed time algorithm algorithm compute optimal costs using clusters time algorithm relates natural regularized version clustering instead specifying number clusters beforehand instead specify cost using extra cluster minimize cost clustering plus cost number clusters used formally problem follows given compute optimal regularized clustering min arg min show problem solvable time input sorted lloyd algorithm implemented run time number rounds expect algorithm compute optimal clustering reasonable essentially time lloyd algorithm approximate problem compute clustering minimize sum absolute distances centroid compute minimize min algorithms generalize naturally solve problem time bounds problem let differentiable strictly convex function bregman divergence induced defined notice bregman divergence induced gives squared euclidian distance bregman divergences metrics since symmetric general triangle inequality necessarily satisfied however many redeeming qualities instance bregman divergences convex first argument albeit second see comprenhensive treatment bregman clustering problem defined find centroids minimize min bregman divergence case inputs assume computing bregman divergence evaluating derivative takes constant time show algorithms naturally generalize clustering using bregman divergence define cluster cost still maintaing running time implementation independent implementation time algorithm available package implementation clustering uses space outline section describe existing time algorithm clustering uses space section show compute output old algorithm using time space show improve running time finally section show new algorithms generalizes different cluster costs squared euclidian distance dynamic programming algorithm section describe previous time space algorithm presented also introduce definitions notation use new algorithm always assume sorted input input sorted start sorting time also remark could many ways partitioning point set computing centroids achieve cost instance case input identical points task hand find optimal solution let cost grouping one cluster optimal choice centroid arithmetic mean points lemma space data structure compute time using time preprocessing proof standard application prefix sums works follows definition access prefix sum arrays centroid cost easily computed constant time algorithm sketch algorithm computes optimal clustering using clusters prefixes input points using dynamic programming follows let cost optimally clustering clusters cost optimally clustering one cluster cluster cost computed time lemma min notice cost optimally clustering clusters cost clustering one cluster makes first point last rightmost cluster let argument minimizes arg min possible exists multiple obtaining minimal value make optimal clustering unique ties broken favour smaller notice first point rightmost cluster optimal clustering thus given one find optimal solution standard backtracking cluster optimal clustering one naively compute entry using takes time cell thus computed time using space exactly described new algorithms idea first new algorithm simply compute tables 
faster reducing time compute row time instead time improvement exploits monotonicity property values stored row explained section resulting time space solution assuming sorted inputs section shows reduce space usage retaining running time section show property allows solve time linear space solve regularized version time faster algorithm monotone matrices section reduce problem computing row searching implicitly defined matrix special form allows compute row linear time define cost optimal clustering using clusters restricted rightmost cluster largest cluster center contain elements convenience define cost clustering clusters last cluster empty means satisfies min definition consistent definition section relates follows min ties broken favor smaller defined section means compute row actually computing minj think matrix rows indexed columns indexed interpretation computing row corresponds computing row column index corresponds smallest value row particular entries correpond value index minimum entry row respectively problem finding minimum value every row matrix studied first need definition monotone matrix definition let matrix real entries let arg min index leftmost column containing minimum value row said monotone implies arg min arg min totally monotone submatrices authors showed following theorem finding arg min row arbitrary monotone matrix requires time whereas matrix totally monotone time fast algorithm totally monotone matrices known smawk algorithm refer cool name let relate clustering problem monotone means consider optimal clustering points clusters start adding points first smallest point last clusters increase move right new optimal clustering sounds like true turns thus applying algorithm monotone matrices fill row time leading time algorithm already great improvement authors use maximum instead minimum however show matrix induced problem fact totally monotone lemma matrix totally monotone proof remarks matrix totally monotone submatrices monotone prove totally monotone thus need prove two row indices two column indices holds notice values correspond costs clustering elements starting rightmost cluster element respectively since min proving min min min min true prove rearranging terms need prove holds property known concave concave short property used significantly speed algorithms including dynamic programming algorithms problems start handling special case case definition thus need show case since point amongst included one since thus cost taking two disjoint consecutive subsets points clustering two sets using optimal choice centroid clearly cost less clustering points using one centroid turn general case let mean mean assume case symmetric finally let denote cost grouping elements cluster centroid split cost cost elements cost elements trivially get since cost using optimal centroid secondly since elements less equal since mean points greater combining results see completes proof theorem computing optimal clustering sorted input size takes time construction cost optimal clustering computed store table cluster centers extracted time reducing space usage following show reduce space usage maintaining running time using space reduction technique hirschberg first observe row refers previous row thus one clearly forget row done computing row problem store backtrack find optimal solution following present algorithm avoids table entirely key observation following assume every prefix computed optimal cost clustering clusters note precisely set values stored row assume furthermore 
computed optimal cost clustering every suffix clusters let denote costs clearly optimal cost clustering clusters given min main idea first compute row row using linear space two compute argument minimizing split reporting optimal clustering two recursive calls one reporting optimal clustering points clusters one call reporting optimal clustering clusters recursion bottoms clearly report optimal clustering using linear space time full set points section already know compute row using linear space simply call smawk compute row throw away row even store done computing row observe table computed taking points reversing order negating values way obtain new ordered sequence points running smawk repeatedly point set produces table optimal cost clustering clusters since cost clustering clusters get row identical row reverse order entries summarize algorithm reporting optimal clustering follows let initially empty output list clusters append cluster containing points otherwise use smawk compute row row using linear space evicting row memory finished computing row time compute argument minimizing time evict row row memory recursively report optimal clustering points clusters appends output terminates recursively report optimal clustering points clusters algorithm terminates contains optimal clustering clusters given time algorithm uses space see first note evict memory used compute value minimizing recursing furthermore complete first recursive call evict memory used starting second finally recursion need make copy points suffices remember working subset inputs let denote time used algorithm compute optimal clustering sorted points clusters constant satisfies recurrence max cnk claim satisfies prove claim induction base case follows trivially inspection formula inductive step use induction hypothesis conclude max cnk max cnk ckn therefore ckn needed prove theorem computing optimal clustering sorted input size takes time uses space note compute cost optimal clustering ensure never delete last column cost matrix requires additional space even faster algorithm section show concave property proved cluster costs yields algorithm computing optimal clustering one given time result follows almost directly schieber gives algorithm aforementioned running time problem finding shortest path fixed length directed acyclic graph nodes weights satisfy concave property represented function returns weight given edge constant time theorem computing minimum weight path length two nodes directed acyclic graph size weights satisfy concave property takes time using space reduce problem directed graph problem follows sort input time let denote sorted input sequence input associate node add extra node define weight edge cost clustering one cluster edge weight computed constant time proof lemma particularly equation edge weights satisfy monge concave property finally compute optimal clustering use schiebers algorithm compute lowest weight path edges theorem computing optimal clustering input size given takes time using space relevant briefly consider parts schiebers algorithm relates clustering particular regularized version problem schiebers algorithm relies crucially algorithms given directed acyclic graph weights satisfy concave property computes minimum weight path time note difference problem compared search restricted paths edges regularized clustering consider regularized version clustering problem instad providing number clusters specify cost cluster ask minimize cost clustering plus penalty cluster used simplicity 
assume input points distinct set optimal clustering cost zero use cluster input point let increase towards infinity optimal number clusters used optimal solution monotonically decrease towards one zero clusters well defined let dmin smallest distance points input optimal cost using clusters less costly use clusters since added clustering using one less cluster smaller cost cluster letting increase inevitably lead miminum value clusters used optimal solution following pattern difference optimal cost using clusters clusters continuing yields interesting event sequence encodes relevant choices regularization parameter note algorithm actually yields since computes optimal cost reduction directed graph problem adding cost cluster used corresponds adding weight edge note edge weights clearly still satisfy concave property thus solving regularized version clustering correpoonds finding shortest path length directed acyclic graph weights satisfy concave property algorithms takes time theorem computing optimal regularized clustering sorted input size takes time notice actually use cost per cluster optimal solution using clusters optimal clustering means inputs integers solve problem simple application binary search time universe size extending distance measures following show generalize algorithm bregman divergences sum absolute distances retaining running time space usage bregman divergence bregman clustering section show algorithm generalizes bregman divergence first let remind bregman divergence bregman clustering let differentiable strictly convex function bregman divergence defined defined bregman clustering bregman clustering problem defined find clustering minimize min notice cluster center second argument bregman divergence important since bregman divergences general symmetric purpose clustering mention two important properties bregman divergences bregman divergence unique element minimizes summed distance multiset elements mean elements exactly squared euclidian distance one sense defining property bregman divergences second important linear separator property important clustering bregman divergences also relevavant bregman voronoi diagrams linear separators bregman divergences bregman divergences locus points equidistant two fixed points terms bregman divergence given corresponds hyperplane also points sits either side hyperplane voronoi cells defined using bregman divergences connected means particular two points hyperplane point points smaller closer points larger closer capture need observation simple distance lemma lemma given two fixed real numbers point point computing cluster costs bregman divergences since mean minizes bregman divergences centroids used optimal clusterings unchanged compared case prefix sums idea used implement data structure used lemma generalizes bregman divergences observed name summed area tables formula computing cost grouping points one cluster follows let arithmetic mean points follows bregman divergence cost consecutive subset input points centroid computed constant time stored prefix sums monge concave totally monotone matrix properties used section prove monge concave property matrix totally monotone mean minimizer sum distances multiset points elements clearly still true lemma follows algorithms specified generalize bregman divergence clustering sum absolute values problem replace sum squared euclidian distances sum absolute distances formally problem compute clustering minimizing min note norms reduce case also note minimizing centroid cluster longer 
mean points cluster median solve problem change centroid median even number points fix median exact middle point two middle elements making choice centroid unique bregman divergences need show compute cost new cost constant time also need compute centroid constant time argue cost monge moncave implies implicit matrix totally monotone arguments essentially completeness briefly cover computing cluster costs absolute distances surprisingly using prefix sums still allow constant time computation let compute centroid computed constant time access prefix sum table also observed monge concave totally monotone matrix monge concave totally monotone matrix argument bregman divergences squared euclidian distance remain valid since first still median points greater furthermore still holds elements less equal follows algorithms specified generalize problem acknowledgements wish thank pawel gawrychowski pointing important earlier work concave property references aggarwal klawe moran shor wilber geometric applications matrixsearching algorithm algorithmica aggarwal schieber tokuyama finding path graphs concave monge property applications discrete computational geometry ahmadian svensson ward better guarantees euclidean algorithms corr aloise deshpande hansen popat euclidean clustering machine learning arnaboldi conti passarella pezzoni analysis ego network structure online social networks privacy security risk trust passat international conference international confernece social computing socialcom pages ieee arthur vassilvitskii slow method proceedings annual symposium computational geometry scg pages new york usa acm arthur vassilvitskii advantages careful seeding proceedings eighteenth annual symposium discrete algorithms pages society industrial applied mathematics awasthi charikar krishnaswamy sinop hardness approximation euclidean arge pach editors international symposium computational geometry socg june eindhoven netherlands volume lipics pages schloss dagstuhl fuer informatik banerjee merugu dhillon ghosh clustering bregman divergences mach learn boissonnat nielsen nock bregman voronoi diagrams discrete computational geometry hirschberg linear space algorithm computing maximal common subsequences commun acm june hirschberg larmore least weight subsequence problem siam journal computing jeske jogler petersen sikorski jogler genome mining phenotypic microarrays planctomycetes source novel bioactive molecules antonie van leeuwenhoek klawe simple linear time algorithm concave dynamic programming technical report vancouver canada canada lee schmidt wright improved simplified inapproximability information processing letters mahajan nimbhorkar varadarajan planar problem pages springer berlin heidelberg berlin heidelberg nielsen nock optimal interval clustering application bregman clustering statistical mixture learning ieee signal process pennacchioli coscia rinzivillo giannotti pedreschi retail market complex system epj data science schieber computing minimum path graphs concave monge property journal algorithms vattani requires exponentially many iterations even plane discrete computational geometry wang song optimal fast univariate clustering package version wang song ckmeans optimal clustering one dimension dynamic programming journal wilber concave subsequence problem revisited journal algorithms yao efficient dynamic programming using quadrangle inequalities proceedings twelfth annual acm symposium theory computing stoc pages new york usa acm
8
copyright notice maximum persistency via iterative relaxed inference graphical models feb alexander shekhovtsov paul swoboda bogdan savchynskyy consider problem undirected discrete graphical models propose polynomial time practically efficient algorithm finding part optimal solution specifically algorithm marks labels considered graphical model either optimal meaning belong optimal solutions inference problem provably belong solution access exact solver linear programming relaxation problem algorithm marks maximal possible specified sense number labels also present version algorithm access suboptimal dual solver still ensure optimality marked labels although overall number marked labels may decrease propose efficient implementation runs time comparable single run suboptimal dual solver method shows results computational benchmarks machine learning computer vision index partial optimality relaxation discrete optimization wcsp graphical models energy minimization ntroduction consider energy minimization maximum posteriori map inference problem discrete graphical models common pairwise case form fuv min minimization performed vectors containing components notation detailed problem numerous applications computer vision machine learning communication theory signal processing information retrieval statistical physics see overview applications even binary case coordinate assigned two values problem known also hard approximate hardness problem justifies number existing approximate methods addressing among solvers addressing linear programming relaxations particular dual count among versatile efficient ones however apart notable exceptions see overview related work approximate methods guarantee neither optimality solutions whole even optimality individual solution coordinates solution returned approximate method optimal one guarantee coordinate contrast method provides guarantees coordinates precisely component eliminates alexander shekhovtsov institute computer graphics vision icg graz university technology inffeldgasse graz austria shekhovtsov paul swoboda discrete optimization group institute science technology austria campus klosterneuburg austria pswoboda bogdan savchynskyy computer vision lab faculty computer science institute artificial intelligence dresden university technology dresden germany values henceforth called labels provably belong optimal solution call eliminated labels persistent single label remain implies optimal solutions holds label called persistent optimal elimination method polynomial applicable approximate solver dual linear programming relaxation problem employed subroutine related work trivial essential observation method identifying persistency based tractable sufficient conditions order avoid solving problem elimination methods dee verify local sufficient conditions inspecting given node immediate neighbors time label node substituted another one energy configurations neighbors increase label eliminated without loss optimality similar principle eliminating interchangeable labels proposed constraint programming generalization related problem weighted constraint satisfaction wcs known dominance rules soft neighborhood substitutability however wcs general considers bounded operation condition appears intractable therefore weaker sufficient local conditions introduced way selects local substitute label using equivalence preserving transforms related method use approximate solution based dual relaxation tentative substitute test labeling although local character dee 
methods allows efficient implementation also significantly limits quality number found persistencies shown considering global criteria may significantly increase algorithm quality roof dual relaxation quadratic optimization qpbo equivalent pairwise energy based truncated model wuv min potts model min graph cut based courtesy kovtun instance available instance used alahari kovtun method shekhovtsov kovtun method kohli mqpbo swoboda fig progress partial optimality methods top row corresponds stereo model potts interactions large aggregating windows unary costs used instance published bottom row refined stereo model truncated linear terms instance hashed red area indicates optimal persistent label pixel found labels might eliminated solution completeness given percentage persistent labels graph cut based methods fast efficient strong unary terms methods able determine larger persistent assignments extremely slow prior work tion binary variables property variables integer relaxed solution persistent several generalizations roof duality energies proposed mqpbo method generalized roof duality extend roof duality case reducing problem binary variables generalizing concept submodular relaxation respectively although binary pairwise energies methods provide good computational efficiency number found persistencies efficacy drops number label grows auxiliary submodular problems proposed sufficient persistency condition multilabel energy minimization case potts model method efficient specialized algorithm although methods shown good efficacy certain problem classes appearing computer vision number persistencies find drastically decreases energy strong unary terms see fig contrast methods technically rely either local conditions computing maximum flow works proposed persistency approaches relying general linear programming relaxation authors demonstrated applicability approach problems utilizing existing efficient approximate algorithms problems addressed using windowing technique despite superior persistency results running time methods remained prohibitively slow practical applications illustrated example fig methods achieve superior results practice even theoretically guaranteed proven method method problem determining maximum number persistencies formulated polynomially solvable linear program guaranteed find provably larger persistency assignment mentioned approaches however solving linear program large scale instances numerically applying multiple local windows prohibitively slow poses challenge designing method would indeed practical contribution work propose method solves maximum persistency problem therefore delivers provably better results methods similar method requires iteratively approximately solve linear programming relaxation subroutine however method significantly faster due substantial theoretical algorithmic elaboration subroutine demonstrate efficiency approach benchmark problems machine learning computer vision outperform competing methods terms number persistent labels method speed scalability randomly generated small problems show set persistent labels found using approximate solver close maximal one established costly scalable method present paper revised version besides reworked explanations shortened clarified proofs one new technical extension general dual algorithm termination guarantees larger class approximate solvers ork overview section serves overview method give general definitions formulate maximum persistency problem briefly describe generic method solve description 
equipped references subsequent sections serve road map rest paper autarky property applied whole search space obtain image potentially smaller search space containing optimal labelings follows restrict substitutions defined locally node indeed already class substitutions covers existing persistency methods fig dead end elimination dominance variables shown boxes possible labels circles label substituted label configuration neighbors energy increase terms inside contribute difference label eliminated without loss optimality general substitution consider applied labels nodes simultaneously illustrated fig two variables obtain following principle identifying persistencies proposition strictly improving substitution optimal solution must satisfy fig simultaneous substitution labels two variables labels arrow tails substituted labels arrow heads joint configuration dashed substituted configuration solid notation problem assume directed graph set nodes set edges let denote ordered pair stands set neighbors node associated variable taking values finite set labels cost functions potentials fuv associated nodes edges respectively let constant term introduce sake notation finally stands cartesian product elements called labelings represent potentials energy single cost vector set enumerates components terms improving substitutions formulate persistency method framework strictly improving substitutions called improving mappings previous works shown existing persistency techniques expressed improving substitutions mapping called substitution idempotent definition substitution called strictly improving cost vector example let consider elimination dee test whether given label single node fig substituted another one fig change energy substitution depends configuration neighbors value change additive neighbors verified whether substitution always improves energy label eliminated test repeated different label reduced problem strictly improving substitution applied labeling guaranteed equal better energy particular strictly improving substitutions generalize strong indeed otherwise contradiction idempotency implies label persistent excluded consideration verification problem verifying whether given substitution strictly improving hard decision problem order obtain polynomial sufficient condition first rewrite energy minimization problem relax end reformulate definition optimization form proposition substitution strictly improving iff min minimizers proof indeed condition equivalent sufficiency minimizer necessity therefore definition follows condition holds minimizer moreover minimizer must definition follows show difference energies represented pairwise energy appropriately constructed cost vector holds therefore according proposition verification strictly improving property reduces minimizing energy checking fulfilled make verification problem tractable relax min minimizers tractable polytope integer vertices correspond labelings standard relaxation use defined relaxed labeling appropriately defined extensions discrete functions defined construction objective value matches exactly integer labelings sufficient hold sufficient condition made precise definition means strictly improves integer labelings also relaxed labelings therefore substitutions called strictly relaxedimproving cost vector assuming fixed context let denotes set substitutions satisfying class substitutions maximizing substitutions tractable maximizing following restricted class assume given test labeling case approximate solution map 
inference consider substituting node subset labels definition substitution class substitutions exist subsets see figs examples note class rather large possible choices existing methods partial optimality still represented using particular methods represented using constant test labeling restriction class allows represent search substitution eliminates maximum number labels one largest inclusion sets substituted labels allows propose relatively simple algorithm cutting plane algorithm algorithm cutting plane method general sense maintain substitution iterations better equal solution achieve feasibility iteratively constraining initialization define substitution sets substitutes everything clearly maximizes objective verification check whether current strictly relaxedimproving solving relaxed problem yes return optimal relaxed solution corresponds violated constraint cutting plane assign substitution defined largest sets yvt constraints satisfied repeat verification step steps illustrated fig clear algorithm stops substitution strictly improving although could identity map maximum persistency substitutions maximum persistency approach consists finding substitution eliminates maximal number labels max fig steps discrete algorithm starting substitution maps everything test labeling red crossed labels would eliminated passes sufficient condition relaxed solution violating sufficient condition found black substitution pruned eliminate labels exact specification cutting plane step derived shown algorithm solves maximum persistency problem optimally work outline give precise formulation relaxed condition components specify details algorithm prove optimality results hold general relaxation require solve linear programs precisely rest paper devoted approximate solution problem finding relaxed improving mapping almost maximum consider specifically standard relaxation reformulate algorithm use dual solver problem gradually relax requirements optimality dual solver keeping persistency guarantees propose several theoretical algorithmic tools solve series verification problems incrementally overall efficiently finally provide exhaustive experimental evaluation clearly demonstrates efficacy developed method elaxed mproving ubstitutions overcomplete representation section formally derive strictly sufficient condition obtain relaxation use standard lifting approach overcomplete representation labeling represented using encoding lifting allows linearize energy function substitution consequently relaxed improving substitution criteria lifting defined mapping iverson bracket equals true otherwise using lifting linearize unary terms similarly pairwise terms allows linearize energy function write scalar product energy minimization problem written min min min conv convex hull labelings lifted space also known marginal polytope last equality uses fact minimum linear function finite set equals minimum convex hull expression equivalent reformulation energy minimization problem linear program however generally intractable polytope relaxed labels mapped indicator label adjoint operator acts follows similarly due puv fuv puv fuv fuv fuv strictly improving substitutions let denote identity mapping using proposition linear extension obtain substitution strictly improving iff value min min lifting substitutions next show substitution represented linear map lifted space allow express term linear function hence also represent criterion proposition given substitution let defined action cost vector follows fuv definition 
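The initialize-verify-prune pattern of the cutting plane algorithm can be expressed compactly. In the sketch below, `solve_relaxed_verification` is a hypothetical black-box oracle that, for the current substitution sets, returns the per-node support sets of optimal relaxed solutions of the verification problem; the rest follows the loop structure described above.

```python
def iterative_pruning(nodes, labels, y, solve_relaxed_verification):
    """Generic cutting-plane / pruning loop (a sketch).  Starts from the
    strongest substitution (everything mapped to the test labeling y) and
    iteratively shrinks the substitution sets Y until the sufficient
    condition holds."""
    Y = {v: set(labels[v]) - {y[v]} for v in nodes}
    while True:
        support = solve_relaxed_verification(Y)  # assumed oracle: sets O_v
        if all(support[v] <= {y[v]} for v in nodes):
            return Y  # current substitution is strictly (relaxed-)improving
        for v in nodes:
            # labels in the support of a violating relaxed solution may be
            # part of an optimal solution: stop substituting them
            Y[v] -= (support[v] - {y[v]})
```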
substitution strictly cost vector shortly strictly minimizers proof let follows expressed scalar product since equality holds follows expression allows write energy substituted labeling reason mapping called linear extension denoted symbol following example illustrates looks coordinates example consider substitution depicted fig defined relaxed labeling structure linear extension written matrix puv defined puv shaped matrix action block expresses zero minimizers note problem form energy minimization cost vector introduced sufficient condition persistency obtained relaxing intractable marginal polytope tractable outer approximation xuv satisfies puv minh min defined polytope standard relaxation arguments general require since includes integer labelings sufficient condition improving substitution hence persistency corollary substitution strictly strictly improving problem called verification decision problem test verify conditions called verification problem eneric ersistence lgorithm structure shown maximum persistency problem class substitutions formulated single linear program substitution represented using auxiliary continuous variables take different approach based observing structure improving substitutions throughout section assume test labeling fixed let compare two substitutions sets labels eliminate substitution eliminates labels equivalently labels definition substitution better equal substitution denoted holds proposition let partially ordered set strictly substitutions maximum let denoted unique solution proof since maximum holds definition thus optimal therefore additionally holds therefore unique solution existence maximum formally follow correctness proof algorithm theorem stronger claim necessary analysis may provide better insight lattice isomorphic lattice sets union intersection operations seen follows composition verified chaining inequalities composition satisfies property identified join least shown hold also structure allows find maximum relatively simple algorithm generic algorithm generic primal algorithm displayed algorithm represents substitution sets labels substituted via line initializes sets labels line constructs cost vector verification condition lines solve verification test whether sufficient conditions satisfied via following reformulation proposition given substitution let denote set minimizers verification support set optimal solutions node iff proof corollary substitution defined holds iff remainder paper relate notation set optimal solutions general face need determine whether optimal solution exists coordinate strictly positive theory linear programming known one takes optimal solution relative interior face relative interior excludes vertices edges support set points matches therefore practically feasible find support sets using single solution found interior method known converge central point optimal face methods based smoothing obtaining exact solution methods may become computationally expensive size inference problem grows despite algorithm implementable defines baseline practically efficient variants solving approximately developed paper since line algorithm verifies precisely condition corollary algorithm terminates soon algorithm iterative pruning input cost vector test labeling output maximum strictly improving substitution true construct verification problem potentials defined return pruning substitutions hence strictly improving opposite case line prunes sets removing labels corresponding support set optimal solutions verification 
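For substitutions of the node-wise form "map every label in Y_u to the test label y_u", the verification cost vector g with E_g(x) = E_f(x) - E_f(sigma(x)) can be assembled directly; the list-of-arrays layout below is an assumption of this sketch. The substitution is then strictly improving iff min_x E_g(x) = 0 and every minimizer is a fixed point of sigma.

```python
import numpy as np

def verification_costs(unary, pairwise, edges, Y, y):
    """Build the verification costs g = f - f(sigma(.)) for the
    substitution mapping Y[u] to y[u] at every node u (a sketch)."""
    sub = [np.arange(len(f)) for f in unary]          # sigma as index maps
    for u, S in enumerate(sub):
        S[list(Y[u])] = y[u]
    g_un = [f - f[s] for f, s in zip(unary, sub)]
    g_pw = [np.asarray(pairwise[e]) -
            np.asarray(pairwise[e])[np.ix_(sub[u], sub[v])]
            for e, (u, v) in enumerate(edges)]
    return g_un, g_pw
```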
identified violate sufficient condition labels may part optimal solution eliminated complete analysis algorithm remains answer two questions terminate optimal maximum persistency problem proposition algorithm runs polynomial time returns substitution proof discussed sets line found polynomial time every iteration algorithm terminated yet least one sets strictly shrinks seen comparing termination condition line pruning line therefore algorithm terminates iterations termination corollary theorem substitution returned algorithm maximum thus solves proof noteworthy algorithm used solve problem polytope satisfying relaxation expressed lifted space moreover order use algorithm higher order models one needs merely straightforwardly generalize linear extension done test labeling chosen using approximate solution via zeroth iteration algorithm picking choice motivated fact strict substitution eliminate labels support set optimal solutions relaxation thus labels may substituted anything else comparison previous work substitutions related expansion move algorithm following sense seeks improve single current labeling calculating optimized crossover fusion candidate labeling seek labels moved guaranteed improvement possible labelings algorithm similar structure later finds improving substitution small class incrementally shrinking set potentially persistent variables specifically given test labeling class substitutions contains substitutions every node either replace labels leaves labels unchanged two possible choices either identity methods explained finding improving mapping class generalize method substitutions original sufficient condition persistency extend substitutions even substitutions generally weaker condition unless special reparametrization applied criterion extends general substitutions depend reparametrization similarly use approximate dual solvers general setting problem formulated one big linear program solve problem hence also problem combinatorial fashion variables defining substitution may seem solving series linear programs rather single one disadvantage proposed approach however show proposed iterative algorithm implemented using dual possibly suboptimal solver relaxed verification problem turns much beneficitial practice since verification problems incrementally updated iteration iteration solved overall efficiently approach achieves scalability exploiting available specialized approximate solvers relaxed map inference essentially dual approximate solver used black box method mode relies persistency problem reduction introduced intermediate steps considered right defining standard relaxation dual relaxation consider standard local polytope relaxation energy minimization problem given following pair primal minhf dual max fuv abbreviates fuv fuv ersistency ual olvers though algorithm quite general practical use limited strict requirements solver must able determine exact support set optimal solutions however finding even single solution relaxed problem standard methods like simplex interior point practically infeasible one switch specialized solvers developed problem although scalable algorithms based smoothing techniques converge optimal solution waiting convergence iteration algorithm make whole procedure impractical general would like avoid restricting certain selected solvers able choose efficient one given problem standard relaxation introduced number primal variables grows quadratically number labels number dual variables grows linearly therefore desirable use solvers working 
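For very small instances the standard local polytope relaxation can be posed verbatim to a generic LP solver. The sketch below uses scipy and exists only to make the primal formulation concrete; the paper's point is precisely that generic LP solvers do not scale to the instances of interest.

```python
import numpy as np
from scipy.optimize import linprog

def local_polytope_lp(unary, pairwise, edges):
    """Solve min <f, mu> over the local polytope: normalization and
    marginalization constraints on lifted variables mu (a sketch for
    tiny instances only)."""
    n = len(unary); K = [len(f) for f in unary]
    off_u = np.cumsum([0] + K)                       # node variable blocks
    sizes_e = [K[u] * K[v] for (u, v) in edges]
    off_e = off_u[-1] + np.cumsum([0] + sizes_e)     # edge variable blocks
    dim = off_e[-1]
    c = np.zeros(dim)
    for u in range(n):
        c[off_u[u]:off_u[u] + K[u]] = unary[u]
    for e, (u, v) in enumerate(edges):
        c[off_e[e]:off_e[e] + K[u] * K[v]] = np.asarray(pairwise[e]).ravel()
    A_eq, b_eq = [], []
    for u in range(n):                               # sum_a mu_u(a) = 1
        row = np.zeros(dim); row[off_u[u]:off_u[u] + K[u]] = 1
        A_eq.append(row); b_eq.append(1.0)
    for e, (u, v) in enumerate(edges):               # marginalization
        for a in range(K[u]):                        # sum_b mu_uv(a,b) = mu_u(a)
            row = np.zeros(dim)
            row[off_e[e] + a * K[v]: off_e[e] + (a + 1) * K[v]] = 1
            row[off_u[u] + a] = -1
            A_eq.append(row); b_eq.append(0.0)
        for b in range(K[v]):                        # sum_a mu_uv(a,b) = mu_v(b)
            row = np.zeros(dim)
            row[off_e[e] + b: off_e[e] + K[u] * K[v]: K[v]] = 1
            row[off_u[v] + b] = -1
            A_eq.append(row); b_eq.append(0.0)
    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * dim)
    return res.fun, res.x
```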
dual domain including suboptimal ones performing descent offer performance limited time budget furthermore fast parallel versions methods developed run making approach feasible vision applications switch dual verification gradually relax requirements solution returned dual solver done following steps optimal dual solution arc consistent dual point dual point main objective ensure cases found substitution strictly improving possibly compromising maximality final practical algorithm operating constraints primal problem define local polytope cost vector called reparametrization holds cost equivalence well see using reparametrization dual problem briefly expressed max note feasible value lower bound primal problem follows assume additionally satisfies following normalization mini automatically minij fuv satisfied optimal solution expressing dual domain let pair primal dual optimal solutions complementary slackness know respective dual constraint holds equality case say active set active dual constraints matches sets local minimizers reparametrized problem argmin complementary slackness obtain inclusion insufficient exact reformulation algorithm however sufficient correctness make sure termination substitution displace labels corollary follows always exists optimal primal solution dual satisfying strict complementarity case relation becomes equivalence algorithm iterative pruning arc consistency input cost vector test labeling output strictly improving substitution true construct verification problem defined use dual solver find arc consistent return pruning substitutions case relative interior points optimal primal resp optimal dual faces relative interior optimal set constraints satisfied equalities smallest depend specific choice strict complementarity turns equality allows compute exact maximum persistency using dual algorithm without reconstructing primal solution however finding appears difficult efficient dual ascent solvers convergence guarantees see allowing find solution satisfying arc consistency definition reparametrized problem called arc consistent fuv active follows active active follows exists fuv active optimal dual solution need arc consistent reparametrized without loss optimality enforce arc consistency labels become inactive procedure support set primal solutions general following holds proposition arc consistency necessary condition relative interior optimality arc consistent proof property favor since ideally interested equality next propose algorithm utilizing arc consistent solver prove guaranteed output persistency arc consistency solver propose algorithm based dual solver attaining arc consistency condition differences algorithm underlined dual solver line finds relative interior optimal solution algorithm solves exactly otherwise suboptimal need reestablish correctness termination lemma termination algorithm algorithm termip nates iterations proof case return condition line satisfied pruning line excludes least one label minj guv min guv fig illustration reduction labels displaced hence associated unary pairwise costs zero case indicated pairwise costs replaced minimum case value guv decreased assuming reductions type symmetric counterparts already performed amount decrease matches value mixed derivative associated paired lemma correctness algorithm holds arc consistent dual vector optimal proof follows algorithm terminates found arc consistent solution optimal case inclusion satisfied found substitution guaranteed solvers converging arc consistency one see arc 
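The arc consistency definition translates into a direct check on the active (locally minimal) labels and label pairs of the reparametrized problem. A sketch, assuming the costs passed in are already reparametrized and normalized, with a numerical tolerance deciding activity:

```python
import numpy as np

def is_arc_consistent(unary, pairwise, edges, tol=1e-9):
    """Check arc consistency: every active pair projects to active labels,
    and every active label is supported by some active pair (a sketch)."""
    act_u = [np.isclose(f, f.min(), atol=tol) for f in unary]
    for e, (u, v) in enumerate(edges):
        g = np.asarray(pairwise[e])
        act_e = np.isclose(g, g.min(), atol=tol)
        if np.any(act_e & ~np.outer(act_u[u], act_u[v])):
            return False                      # active pair, inactive endpoint
        if np.any(act_u[u] & ~act_e.any(axis=1)):
            return False                      # active label without support
        if np.any(act_u[v] & ~act_e.any(axis=0)):
            return False
    return True
```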
consistency required termination algorithm intermediate iterations may well perform pruning step line without waiting solver converge motivates following practical strategy perform number iteration towards finding arcconsistent dual point check whether labels prune terminate arc consistent nothing prune otherwise perform iterations towards arc consistency solver guaranteed eventually find arc consistent solution overall algorithm either terminate arc consistent lemma optimal labels prune however face question happens dual solver find arc consistent solution finite time case algorithm iterating infinitely pruning available time guarantee pruning step occur point thus simply terminate algorithm get persistency guarantees even dual solver guaranteed converge finite number iterations principle possible time needed pruning succeed would proportional time convergence making whole algorithm slow instead desirable guarantee valid result allowing fixed time budget dual solver overcome difficulty help reduced verification presented next erification roblem eduction algorithms iteratively solve verification problems replace verification solved step simpler reduced one without loss optimality algorithms definition let cost vector verification reduced cost vector defined guv guv minj min guv reduction illustrated fig taking account guv reduction interpreted forcing inequality guv guv guv guv mixed discrete derivatives fourtuples cost vector therefore partial submodular truncation recall algorithm iterations prunes substitutions belong based solutions verification following theorem reestablishes optimality step reduction theorem reduction let corresponding reduced cost vector constructed def let also iff proof procedure dual correct minij minj mini mini return normalize terminate lines neither occurs certain number iterations stopping condition line pruning based currently active labels executed line cost vector rebuilt dual solver continues last found dual point warm start explained next section critical overall correctness focus new termination conditions lines correction step line introduced whose purpose move slacks pairwise terms unary terms active labels become decisive procedure defined procedure correction intermixed dual updates serves proxy solver termination conditions following property lemma output procedure feasible satisfies min theorem corollary follows iff support sets optimal solutions xuv min min reduced verification moreover input feasible lower bound decrease therefore valid algorithms consider proof line procedure moves constant edge reduced prune substitutions satisfy zero node turns minimum terms guv property optimal relaxed solutions lines turn zero minimal pairwise value attached support sets general differ original label provides line provides case verification however purpose algorithm feasibility initial implies values equivalent replacement potentially affecting order increase steps hence unary potentials substitutions pruned remain therefore step decrease reduction following advantages lower bound value subsets labels contracted single repaccording lemma procedure worsen lower resentative label associated unary pairwise bound attained dual solver following theorem guarantees costs equal pruning possible corrected dual point allow see relax requirements constitutes optimal solution ensuring persistency proximate dual solvers needed establish termination theorem let dual point reduced problem correctness algorithm useful speed heuristics particular satisfying either dual 
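The reduction can be read as a partial submodular truncation relative to the test labels. The sketch below enforces the stated four-tuple inequality g(a,b) + g(y_u,y_v) <= g(a,y_v) + g(y_u,b) by truncating each pairwise entry from above; this is one way to realize such a truncation and is not claimed to be the paper's exact reduction formula.

```python
import numpy as np

def partial_submodular_truncation(g, yu, yv):
    """Truncate pairwise costs so the mixed discrete derivative paired
    with the test labels (yu, yv) is nonpositive (a sketch)."""
    g = np.asarray(g, dtype=float).copy()
    # broadcasted upper bound: g(a, yv) + g(yu, b) - g(yu, yv)
    bound = g[:, [yv]] + g[[yu], :] - g[yu, yv]
    return np.minimum(g, bound)
```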
optimal primal optimal easier find labeling negative cost since decreased many edge costs shown labeling allows early stopping dual solver proof assume hold pruning substitution without loss maximality let pick node label ensured edge label similarly exists ersistency inite umber ual partial submodularity pdates assume suboptimal dual solver iterative represented procedure given current dual point makes step resulting new dual point guess primal integer solution setting propose algorithm inner loop algorithm calls line checks whether speedup shortcut available line verifies whether already therefore hence active therefore dual point satisfy complementarity slackness conditions hence dual optimal algorithm efficient iterative pruning input problem test labeling output improving substitution set initial dual solution available true apply single node pruning construct reduced verification current sets according definition repeat apply pruning cut goto step rebuild verification optimality return stopping condition iteration limit prune procedure input cost vector dual point output new dual point approximate primal integer solution loss optimality algorithm lemma suggests solve simpler verification subset guarantee remove substitutions implies one switch afterwards much efficient optimization lemma provide two examples efficient procedures lemma let defined depends let let proof note theorem necessary sufficient pruning lemma sufficient pruning negative labelings assume found integer labeling lemma gives answer nodes label pruned set without loss optimality define following restriction polytope polytope corresponds restriction label set node according lemma need solve problem theorem termination correctness algorithm stopping condition line algorithm terminates outer iterations returns proof algorithm yet terminated pruning guaranteed possible compare conditions lines iteration limit follows algorithm terminates theorem follows dual optimal hence therefore sufficient strictly according corollary prove similar result holds trws iteration without correction arguing complete chain subproblems instead individual nodes correction might needed case algorithm keep slacks nodes srmp stopping condition line algorithm controls aggressiveness pruning performing fewer iterations may result found maximum case guaranteed algorithm stall identifies correct persistency case solver convergence optimality guarantees time budget controls degree approximation maximum persistency peed ups inference termination without loss maximality next propose several sufficient conditions quickly prune substitutions without worsening final solution found algorithm follows definition existence labeling sufficient prove substitution strictly hence one could consider updating current substituttion without waiting exact solution inference problem line tricky part find labels pruned without exclude due partial submodularity problem submodular solved algorithms found energy necessarily nodes hold therefore pruning take place single node pruning let consider single node polytope special case differ single node case problem amounts calculating value must excluded single node pruning applied pairs exhaustively efficient keep track nodes sets changed either due negative labeling pruning active labels pruning line single node pruning check neighbors efficient message passing main computational element dual coordinate ascent solvers like trws mplp passing message update form fuv many practical cases message passing computed time linear 
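The message-passing updates that dominate the cost of dual coordinate-ascent solvers such as TRW-S and MPLP take the form m(b) = min_a f_u(a) + f_uv(a, b). A generic update is O(K^2) per edge, but for structured terms such as Potts it collapses to O(K), which is the kind of structure the fast message passing result exploits. A minimal sketch:

```python
import numpy as np

def message_generic(f_u, f_uv):
    """Generic message update m(b) = min_a f_u(a) + f_uv(a, b): O(K^2)."""
    return (f_u[:, None] + f_uv).min(axis=0)

def message_potts(f_u, lam):
    """For Potts terms f_uv(a, b) = lam * [a != b] the same message is
    O(K): m(b) = min(f_u(b), min_a f_u(a) + lam)."""
    return np.minimum(f_u, f_u.min() + lam)
```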
number labels case fuv convex function minimum functions potts model min however algorithm need solve problem cost vector resp apply reduction turns whenever fast message passing method holds algorithm iterative relaxed inference kovtun mqpbo ing cplex algorithm using initial solution uses iterations method converged speedups method clpex method single formulation maximum strong persistency solved cplex method kovtun multilabel qpbo mqpbo random permutations accumulating persistency table list evaluated methods theorem fast message passing message passing edge term reduced fuv time proof table complexity proportional size sets labels pruned sets course algorithm less work required note contrary limiting number iterations dual solver described speedups presented section sacrifice persistence maximality experiments instances algorithm finished ever reaching step cases found substitution maximum xperimental experiments study well approximate maximum persistency table illustrate contribution different speedups table give overall performance comparison larger set relevant methods table provide detailed direct comparison relevant scalable method using exact approximate solvers table measure persistency use percentage labels eliminated improving substitution random instances table gives comparison random instances generated small problems grid uniformly distributed integer potentials full model potts type potts model seen exact algorithm performs identically formulation although solves series lps opposed single solved scales better larger instances instances size formulation already difficult cplex takes excessive time sometimes returns computational error performance dual algorithm confirms loose little terms persistency gain significantly speed implementation method available http benchmark problems table summarizes average performance opengm mrf benchmark datasets include previous benchmark instances computer vision protein structure prediction well models literature results per instance given speedups experiment report much speed improvement achieved subsequent technique evaluation table starts basic implementation using warm start solver allowed run iterations partial optimality phase pruning attempted expect datasets percentage persistent labels improves apply speedups since without loss maximality discussion tables demonstrate using suboptimal dual solver closely approximates maximum persistency method also significantly faster scales much better method closest contender terms algorithm design tables clearly show method determines larger set persistent variables holds true exact cplex well approximate trws solvers two reasons discusssed first optimize larger set substitutions identify persistencies limited persistencies second even case persistencies criterion general weaker depends initial reparametrization problem later difference matter potts models examples figs matter fig although method searches significantly larger space possible substitutions needs fewer iterations due speedup techniques details iteration counts found comparison running time taken account different methods optimized different degree nevertheless clear algorithmic speedups crucial making proposed method much practical maintaining high persistency recall quality provide insights numbers reported illustrate figs interesting cases fig shows hardest instance family identified persistencies allow fix single label pixels pixels one possible label remains remainder problem reduced search space passed solvers tsukuba image 
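The persistency measure used throughout the experiments, the percentage of labels eliminated by the improving substitution, is straightforward to compute from the substitution sets (a sketch, with the same hypothetical layout as the earlier snippets):

```python
def persistency_percentage(Y, labels):
    """Percentage of labels eliminated by the substitution: labels in Y[u]
    are mapped away at node u and hence excluded from consideration."""
    eliminated = sum(len(Y[u]) for u in Y)
    total = sum(len(labels[u]) for u in labels)
    return 100.0 * eliminated / total
```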
fig interesting appeared many previous works performance based persistency methods relies much strong unary costs proposed method robust fig shows easy example relaxation tight dual solver finds optimal labeling verification confirms solution unique fig show hard instance partial reason hardness integer costs leading optimal solution fig instance report solution completeness correspond trivial forbidden labels problem big unary costs time methods perform even worse problem problem family table performance evaluation random instances problem family size type potentials number labels average performance samples given allow precise comparison methods initialized test labeling found relaxation closely approximates matches scales much better problem family mqpbo kovtun proteinfolding table performance opengm benchmarks columns denote number instances labels variables respectively method average instances family reported result available memory implementation limitation instance initialization extra time persistency speedups pruning pruning msgs protein folding table evaluation speedups selected examples computational time drops left right add techniques described example final time persistency fraction initialization time example times initialization persistency comparable speedups also help improve persistency based exact criteria instance table comparison using exact approximate solvers examples hard proteinfolding instances initialization persistency time given occasionally better persistency explained different test labelings produced cplex solvers unlike table results wold identical proven verified random instances hard interaction constraints seems hard constraints ambiguous solutions pose difficulties methods including onclusions utlook presented approach find persistencies problem employing solvers convex relaxation using suboptimal solver relaxed problem still correctly identify persistencies whole approach becomes scalable method exact solver matches maximum persistency suboptimal solver closely approximates outperforming state art persistency techniques speedups developed allow achieve reasonable computational cost making method much practical works build fact approach takes approximate solver like turns method partial optimality guarantees reasonable computational overhead believe many presented results extended higher order graphical models tighter relaxations practical applicability approximate solvers explored research direction seems promising mixing different optimization strategies persistency cutting plane methods acknowlegement alexander shekhovtsov supported austrian science fund fwf start project bivision paul swoboda bogdan savchynskyy supported german research foundation dfg within program graphical models applications image analysis grant grk bogdan savchynskyy also supported european research council erc european unions horizon research innovation program grant agreement eferences opengm benchmark http kovtun min remainder fig performance hard segmentation problem remainder problem visualizes pixels result improved method potts model fig examples easy problem trws finds optimal solution zero integrality gap therefore method well report remainder fig examples hard stereo problem method kovtun gives therefore displayed result potts model adams lassiter sherali persistency polynomial programming mathematics operations research aggarwal klawe moran shor wilber geometric applications algorithm algorithmica alahari kohli torr reduce reuse recycle efficiently solving mrfs 
cvpr boros hammer optimization discrete applied mathematics boykov veksler zabih fast approximate energy minimization via graph cuts ieee trans pattern anal mach choi rutenbar hardware implementation mrf map inference fpga platform field programmable logic pages ieee cooper givry sanchez schiex zytnicki werner soft arc consistency revisited artificial intelligence givry prestwich sullivan deadend elimination weighted csp schulte editor volume lecture notes computer science pages springer desmet maeyer hazes lasters elimination theorem use protein positioning nature freuder eliminating interchangeable values constraint satisfaction problems proceedings ninth national conference artificial intelligence volume aaai pages aaai press globerson jaakkola fixing convergent message passing algorithms map nips goldstein efficient rotamer elimination applied protein related spin glasses biophysical journal gridchyn kolmogorov potts model parametric maxflow functions iccv hirata unified algorithm computing distance maps information processing letters hurkat choi nurvitadhi rutenbar fast hierarchical implementation sequential treereweighted belief propagation probabilistic inference field programmable logic ilog ilog cplex software mathematical programming optimization see http kappes andres hamprecht nowozin batra kim kausler lellmann komodakis savchynskyy rother comparative study modern inference techniques structured discrete energy minimization problems international journal computer vision pages kohli shekhovtsov rother kolmogorov torr partial optimality mrfs icml kolmogorov convergent message passing energy minimization pami kolmogorov generalized roof duality bisubmodular functions discrete applied mathematics kolmogorov reweighted message passing revisited arxiv kolmogorov zabin energy functions minimized via graph cuts pami kovtun partial optimal labeling search subclass max problems pages kovtun sufficient condition partial optimality max labeling problems usage control systems computers special issue lecoutre roussel dehani wcsp integration soft neighborhood substitutability milano editor volume lecture notes computer science pages springer shekhovtsov huber complexity discrete energy minimization problems european conference computer vision pages meijster roerdink hesselink general algorithm computing distance transforms linear time pages springer boston probabilistic inference challenge http kovtun remainder fig example hard instance photomontage number views covering pixel usually smaller total number views number labels method mostly originate elimination redundant labels result gives visually kovtun improves due choosing optimal reparametrization note partial labeling part one label remaining larger rother kolmogorov lempitsky szummer optimizing binary mrfs via extended roof duality cvpr savchynskyy kappes schmidt study nesterov scheme lagrangian decomposition map labeling cvpr savchynskyy schmidt kappes efficient mrf energy minimization via adaptive diminishing smoothing uai schlesinger antoniuk diffusion algorithms structural recognition optimization problems cybernetics sys shekhovtsov exact partial energy minimization computer vision phd thesis cmp czech technical university prague shekhovtsov maximum persistency energy minimization cvpr shekhovtsov maximum persistency energy minimization technical report graz university technology shekhovtsov higher order maximum persistency comparison theorems cviu accepted shekhovtsov reinbacher graber pock solving dense image matching using 
discretecontinuous optimization computer vision winter workshop page shekhovtsov swoboda savchynskyy maximum persistency via iterative relaxed inference graphical models cvpr shekhovtsov swoboda savchynskyy maximum persistency via iterative relaxed inference graphical models corr shlezinger syntactic analysis visual signals presence noise cybernetics systems analysis see review swoboda savchynskyy kappes partial optimality via iterative pruning potts model ssvm swoboda savchynskyy kappes partial optimality pruning general graphical models cvpr swoboda shekhovtsov kappes schnorr savchynskyy partial optimality pruning mapinference general graphical models pami szeliski zabih scharstein veksler kolmogorov agarwala tappen rother comparative study energy minimization methods markov random fields priors pami vanderbei linear programming foundations extensions department operations research financial engineering princeton university wainwright jordan graphical models exponential families variational inference found trends mach wang zabih preprocessing techniques markov random field inference ieee conference computer vision pattern recognition cvpr werner linear programming approach problem review pami windheuser ishikawa cremers generalized roof duality optimization optimal lower bounds persistency eccv yanover weiss minimizing learning energy functions prediction jour comp zach principled approach map inference cvpr ppendix roofs proposition arc consistency necessary condition relative interior optimality arc consistent proofs generic algorithms proof condition implies satisfies strict complementarity primal optimal solution strict complementarity implies feasibility must hold using complementary slackness must fuv similarly second condition arc consistency verified follows arc consistent proposition given substitution let denote set minimizers verification support set optimal solutions node iff proof direction let assume contradiction since exists image due evaluating extension contradicts direction let clearly holds support set given hence remains show value minimum zero objective vanishes proofs reduction proof reduction theorem lemma used heuristics requires several intermediate results recall correct pruning done guarantee preserve strictly improving substitutions assuming therefore statements section formulated pairs consider adjustments cost vector preserve set strictly improving substitutions adjustments general preserve optimal solutions associated relaxation theorem substitution returned algorithm maximum thus solves lemma let iff proof let since holds implies therefore proof two following lemmas form basis proof lemma thm let let line algorithm case substitutions class statement additionally simplifies follows assume equality ensures iff theorem follows definition reformulate condition use following dual characterization corollary assume conditions lemma additionally let theorem characterization let proof follows similarly corollary assume contradiction since exists image due evaluating extension contradicts statement lemma iff exists reparametrization lemma let denote substitution computed line algorithm iteration algorithm maintains invariant following lemma assumes arbitrary substitution necessarily takes input sets subsets immovable labels context theorem use proof prove induction statement holds trivially first iteration assume true current iteration holds therefore corollary applies show line prunes substitutions follows let substitution next iteration computed line pruning line 
assume contradiction negating definition expanding lemma reduction substitution let let guv let defined min guv min guv otherwise iff proof direction let verify following inequality xuv guv guv pruned line must contradicts therefore need consider cases guv let remaining case symmetric case substituting prove however case fails hold contradicts assumption induction therefore holds induction every iteration proposition algorithm terminates returns substitution lemma returned substitution satisfies maximum guv min guv min guv left hand side zero respective components zero assumption time right hand side proved total satisfies inequalities theorem follows strict inequality case considered similarly lemma since inequality holds implies multiplication pairwise components using equality unary components theorem reduction let corresponding reduced cost vector constructed def let also iff note cost vector satisfying called auxiliary inequality equivalent proof let lemma iff need consider pairwise terms let since necessarily let defined using sets reduction composed reductions lemma lemma guv conditions lemma satisfied obtain part reduction cases let denote reduced vector applying lemma obtain defined whenever left hand side strictly positive right hand side therefore follows direction assume theorem exist dual multipliers verifies inequality components xuv guv guv let expand pairwise inequality case let using obtain lemma let defined depends let let guv guv guv guv proof let assume contradiction case theorem since holds follows feasible solution better cost contradicts optimality must therefore claim follows termination arc consistent solvers take min sides min guv min guv finally subtract sides obtain theorem consider verification defined let reparametrization let least one two conditions satisfied dual optimal case symmetric remaining cases total satisfies inequalities theorem proof assume hold node let chose label arc consistency edge label guv active similarly exists guv active construction guv xuv therefore following modularity equality holds shown remains prove inequality holds strictly since holds hqt necessary least one unary pairwise inequalities support holds strictly case inequality also strict lemma reduction substitution cost vector let zero unary components pairwise components read max guv guv guv guv guv guv active guv guv guv guv iff guv guv proof scheme proof similar lemma unary components equal show inequality implication follow lemma inequality reduces adding obtain hence active therefore dual point satisfy complementarity slackness hence optimal lemma correctness algorithm holds arc consistent dual vector optimal due idempotency left hand side identically zero therefore inequality verified direction assume theorem exist dual multipliers satisfying inequalities consider proof corollary theorem fast message passing theorem fast message passing message passing edge term reduced fuv time let show inequalities hold clearly hold unary components pairwise components let let must let denote guv guv guv guv let guv follows proof components reduced problem expressed directly components table passing message edge amounts calculating vector substituting pairwise terms expands fuv fuv minj fuv fuv min fuv fuv table components reduced verification problem since message equal sufficient represent recall substituting pairwise terms denoting fuv min min fuv min adding inside zero grouping together obtain min min expression message passing minimum result needed message computed time using 
algorithms of distance-transform type (see also the references cited above); evaluating the remaining terms takes additional O(K) time, and taking the minimum takes O(K) time, assuming the corresponding components are equal, which already holds by construction.

Appendix: Results per instance.

[Table: detailed per-instance experimental evaluation. Rows are benchmark instances (including the protein-folding, fourcolors and snail models); columns report the algorithm, time needed, overall time, initial solution, the (logarithmically scaled) percentage of partial optimality, the percentage of excluded labels, and iteration counts. Methods compared: the proposed algorithm with the CPLEX subsolver, the proposed algorithm with the TRW-S subsolver, their respective counterparts as denoted in the original caption, MQPBO run for one iteration with a predefined label order, MQPBO run for several iterations with random label orders, and Kovtun's method.]
approximating geodesics via random points nov erik davis sunder sethuraman abstract given cost functional paths domain form interest approximate minimum cost geodesic paths let points drawn independently according distribution density form random geometric graph points connected length scale vanishes suitable rate general class functionals associated finsler distances using probabilistic form gamma convergence show minimum costs geodesic paths respect types approximating discrete cost functionals built random geometric graph converge almost surely various senses corresponding continuum cost number sample points diverges particular geodesic path convergence shown appears among first results kind introduction understanding shortest geodesic paths points medium intrinsic concern diverse applied problems optimal routing networks disordered materials identifying manifold structure large data sets well studies probabilistic models since seminal paper recent survey see also consider percolation continuum settings sometimes abstract formulas geodesics calculus variations differential equation approaches instance respect patch riemannian manifold tensor field known distance function fixed viscosity solution eikonal equation boundary condition kvka avi standard innerproduct geodesic connecting may recovered solving descent equation scalar function controlling speed hand computing numerically distances geodesics may complicated issue one standard approaches fast marching method approximate distance solving eikonal equation regular grid points method extended variety ways including respect triangulated domains well irregular samples euclidean submanifold see also contexts review mathematics subject classification key words phrases geodesic shortest path distance consistency random geometric graph gamma convergence scaling limit finsler erik davis sunder sethuraman alternatively variants dijkstra heat flow methods graphs approximating space sometimes used dijkstra algorithm distances shortest paths found successively computing optimal routes edges heat flow methods geodesic distances found terms small time asymptotics heat kernel space instance see another idea collect random sample points manifold embedded put network structure points say terms geometric neighbor graph approximate continuum geodesics lengths lengths discrete geodesic paths found network presumably assumptions points sampled random graphs formed number points diverge discrete distances converge almost surely continuum shortest path lengths statistical consistency result fundamental manifold learning instance popular isomap procedure based notions elicit manifold structure data sets specifically let subset corresponding patch manifold consider kernel define path infimum costs paths example pth power euclidean distance respect class functions samples drawn distribution density papers address among results decrease increase respectively various concentration type bounds types discrete continuum optimal distances hold high probability leading consistent estimates instance graphs smooth certain density dependent estimators continuum distances considered decreasing smooth constant small bounded away work extends considered uniformly distributed samples hand among results graphs continuum distances increasing lipschitz bounded away approximated see also contexts purpose article twofold first identify general class different associated discrete distances formed random geometric graphs domain converge almost surely second describe associated 
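The graph-based strategy described here, running Dijkstra on a neighborhood graph of samples and reading off both the distance and the discrete geodesic, can be sketched in a few lines. Edge weights |x_i - x_j|^p correspond to the p-th power Euclidean kernel mentioned above; the function names and the choice of scipy routines are this sketch's, not the paper's.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra

def graph_geodesic(X, eps, src, dst, p=1.0):
    """Isomap-style estimate (a sketch): connect samples within eps,
    weight each edge by |xi - xj|^p, and backtrack Dijkstra's predecessor
    tree to recover the discrete geodesic through the samples."""
    n = len(X)
    pairs = cKDTree(X).query_pairs(r=eps, output_type='ndarray')
    w = np.linalg.norm(X[pairs[:, 0]] - X[pairs[:, 1]], axis=1) ** p
    A = csr_matrix((w, (pairs[:, 0], pairs[:, 1])), shape=(n, n))
    D, pred = dijkstra(A + A.T, indices=src, return_predecessors=True)
    path, v = [], dst
    while v != src and v >= 0:        # sentinel predecessor is negative
        path.append(v); v = pred[v]
    return D[dst], [src] + path[::-1]
```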
discrete geodesic paths converge almost surely uniform hausdorff norms continuum geodesic paths type consistency appears among first contributions kind main results theorems corollary see section subsection precise statements related remarks consider following three different discrete costs first optimizes paths starting ending respectively linearly interpolated points consecutive points within time traverse link second optimizes respect quasinormal interpolations points using however paths third interpolate optimizes riemann sum approximating geodesics via random points cost number edges discrete path note discrete distances setting introduced density dependent versions used results discrete distance although natural seems well considered literature conditions impose include convexity ellipticity condition respect smoothness assumption away conditions include large class kernels associated finsler spaces well kernels considered respect graphs domain assumed bounded convex also assume rate decrease graph connected large main contribution article provide general setting discrete continuum convergences hold remark proof method quite different literature specific features important estimation distances easily generalized give probabilistic form gamma convergence derive almost sure limits may interest method involves showing liminf limsup compactness elements analysis context appropriate probability sets part output technique beyond giving convergence distances yields convergence minimizing discrete paths continuum geodesics various senses share different properties depending also instance invariant reparametrization path exactly also form may seen coercive modulus case fact case troublesome assumptions required theorems deal linear path cost riemann cost rougher quasinormal cost time contrast paths lie interval also problem somewhat degenerate invariance reparametrization costs turn nonrandom reduce integral also cost riemann sum converges integral finally comment difference viewpoint respect results continuum percolation riemann sum cost considered seems related different cost optimized works one optimizes cost path along random points origin given infers scaled distance law large numbers scale proportionality constant explicit contrast however article given already integral viewpoint optimize costs paths length order length scale points scaled order recover limit note also another difference remarked origin instead continuum percolation studies section setting assumptions results given respect three types discrete costs section proofs theorems corollary erik davis sunder sethuraman path level sets uniform points dicated figure continuum geodesic domain exp interpolating costs given section proofs theorems respect riemann costs interpolating costs given section technical results used course main proofs collected setting results introduce setting problem standing assumptions hold throughout article working subset closure open bounded convex domain therefore lipschitz domain corollary section consider points let denote space lipschitz paths given define cost associated optimal cost inf make following assumptions integrand continuous convex exists approximating geodesics via random points exist constants remark holds may extended function part reasoning assumptions include familiar kernel arclength path length line segment also assumptions known infimum attained path perhaps nonuniquely see proposition appendix addition remark additional differentiability assumptions represents finsler distance 
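A sketch of the construction of the random geometric graph, with a concrete admissible rate: the choice eps_n = ((log n)^2 / n)^(1/d) gives n * eps_n^d / log n = log n, which diverges, so the decay condition holds and the graph is almost surely connected for large n.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def random_geometric_graph(n, d=2, seed=0):
    """Sample n iid uniform points in [0,1]^d and connect pairs within
    eps_n satisfying the connectivity rate condition (a sketch)."""
    rng = np.random.default_rng(seed)
    X = rng.random((n, d))
    eps = ((np.log(n) ** 2) / n) ** (1.0 / d)
    pairs = cKDTree(X).query_pairs(r=eps, output_type='ndarray')
    w = np.linalg.norm(X[pairs[:, 0]] - X[pairs[:, 1]], axis=1)
    A = csr_matrix((w, (pairs[:, 0], pairs[:, 1])), shape=(n, n))
    A = A + A.T
    k, _ = connected_components(A, directed=False)
    return X, A, eps, k == 1          # last entry: is the graph connected?
```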
references therein cost interesting scaling property cost invariant smooth reparameterization paths given path smooth increasing one property allows deduce satisfies triangle property guaranteed let path path write path following time intervals respectively optimizing gives construct random geometric graph approximations geodesics made let sequence independent points identically distributed according distribution probability density let fix length scale respect realization define graph vertex set connecting edge iff refers euclidean distance say finite sequence vertices path edge let denote set paths assume certain decay rate namely lim sup log type decay rate almost surely large points connected path graph words nonempty indeed rate degree point graph diverge infinity see proposition appendix remarks section erik davis sunder sethuraman also assume underlying probability density uniformly bounded exists constant see figure parts depict geodesic path respect cost graph standing assumptions summarize assumptions dimension items decay rate density bound denoted standing assumptions hold throughout article next two subsections present results approximation geodesics respect two types schemes approximating costs built terms interpolations points also terms riemann sums interpolating costs introduce two types discrete costs based linear quasinormal paths linear interpolations respect realization let denote linear path given consider define concatenation linear segments segment traversed time precisely define lvi note resulting piecewise linear path define subset define random discrete cost words restriction noting taking form lvi lvi quasinormal interpolations define different discrete cost may nonlinearly interpolate among points paths say lipschitz path quasinormal respect exists known standard assumptions see proposition exists quasinormal path optimal follows refer approximating geodesics via random points quasinormal path connecting mean fixed optimal path denoted given path let denote concatenation segment uses time precisely define piecewise linear functions define subset let denote restriction respect path evaluate optimality segments also optimal sense inf inf infima lipschitz paths relations point remark kernels namely linear segments fact quasinormal geodesics example identifying kernels question long history going back hilbert whose problem paraphrased asks geometries geodesics straight lines surveys hamel criterion namely solution question see references therein also note mentioned introduction case degenerate min min random indeed let arg min suppose observe must nondecreasing otherwise one could build smaller cost path parts using invariance reparametrization violating optimality particular using changing variables argument yields min consider degenerate case erik davis sunder sethuraman first result linearly interpolated paths theorem suppose respect realizations probability set following holds minimum values costs converge minimum lim min min moreover consider sequence optimal paths arg min subsequence subsequence converges uniformly limit path arg min lim sup addition unique minimizer whole sequence converges uniformly case requires development addressed assumptions theorem address quasinormal interpolations theorem suppose either respect realizations probability set following holds minimum values energies converge minimum lim min min moreover consider sequence optimal paths arg min subsequence subsequence converges uniformly limit path arg min lim sup addition unique minimzer 
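Evaluating the discrete cost of the piecewise linear path through sample points reduces to a sum over segments, each traversed in time 1/m at constant velocity. The sketch below uses left-endpoint quadrature in the spatial argument, which is exact when the kernel is constant in x along each segment; for a p-homogeneous kernel the sum equals m^(p-1) * sum_i Phi(v_i, v_{i+1} - v_i).

```python
import numpy as np

def linear_path_cost(V, phi):
    """Cost of the piecewise-linear path through vertices V (array of
    shape (m+1, d)), segment i traversed in time 1/m with constant
    velocity m * (V[i+1] - V[i]) (a left-endpoint quadrature sketch)."""
    m = len(V) - 1
    return sum(phi(V[i], m * (V[i + 1] - V[i])) / m for i in range(m))

# example kernel: squared Euclidean speed, Phi(x, v) = |v|^2 (p = 2)
phi = lambda x, v: float(np.dot(v, v))
```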
whole sequence converges uniformly remark certain ambiguity results theorem due invariance reparametrization paths case unique consider example case minimizer functional parameterization line course minimizers unique one way address formulate certain hausdorff convergence respect images paths given denote image consider hausdorff metric dhaus defined compact subsets dhaus max sup inf sup inf corollary suppose either consider paths large either form arg min arg min respect realizations probability set subsequence subsequence converges hausdorff sense arg min optimal path approximating geodesics via random points figure discrete path setting figure linearly interpolated visual clarity moreover unique reparametrization minimizer whole sequence converges lim dhaus riemann sum costs interpolating costs first introduce cost requires knowledge discrete points consequence applicable end subsection return linear interpolated costs define functional sense riemann sum approximation therefore behavior behavior minimizing paths similar see figure example optimal path make intuition rigorous establishing variants theorems respect cost given rougher nature however additional assumptions beyond standing assumptions helpful regard previous subsection results differ two cases define following smoothness condition lip exists note satisfies homogeneity condition uniformly bounded lip holds erik davis sunder sethuraman consider behavior analogue theorem corollary setting following theorem suppose addition satisfies lip respect realizations probability set minimum values energies converge minimum lim min min consider sequence optimal discrete paths arg min linear interpolations subsequence correspondingly subsequence linear paths converges uniformly limit path arg min discrete paths hausdorff sense unique minimizer whole sequence linear paths converges uniformly whole sequence discrete paths converges hausdorff sense need impose assumptions integrand state results case see examples satisfying conditions also subsection comments hilb say satisfies hilbert condition inf straight lines geodesics kernel trineq say satisfies triangle inequality pythag let consider points suppose constant dist line say satisfies pythagoras constant line line segment statement hilb kernel function fixed function following lemma case hamel criterion discussed previous subsection lemma given standing assumptions suppose also fixed positive definite hessian hilb satisfied proof fix quasinormal minimizer inf inf approximating geodesics via random points prop let satisfies equation words denotes hessian assumption positive definite hence parametrization straight line example class kernels satisfying standing assumptions additional conditions given following result recall denotes euclidean inner product lemma let strictly elliptic function kernel satisfies standing assumptions also lip hilb trineq pythag proof kernel clearly satisfies standing assumptions lip next fixed map satisfies conditions lemma satisfies hilb also trivially satisfies trineq show pythag case notation easier ideas carry general case consider right triangle joining line figure figure geometric argument used proof lemma respect pythag line segment connecting either either case pythag satisfied suppose line segment connecting triangle hypotenuse legs hence lengths less given min max max max max erik davis sunder sethuraman similar inequality max holds argument hence also need limit decay properties next result see subsection comments limitation namely suppose form max note 
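The Hausdorff convergence in the corollary is stated for path images, which in practice are finite point sets; their Hausdorff distance amounts to two directed nearest-neighbour queries. A sketch (scipy.spatial.distance.directed_hausdorff offers an equivalent built-in):

```python
import numpy as np
from scipy.spatial import cKDTree

def hausdorff(P, Q):
    """Hausdorff distance between two sampled path images P, Q (arrays of
    points): max of the two directed nearest-neighbour deviations."""
    dPQ = cKDTree(Q).query(P)[0].max()
    dQP = cKDTree(P).query(Q)[0].max()
    return max(dPQ, dQP)
```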
condition form yields however max conjuction limits interval theorem suppose also satisfies lip hilb respect realizations probability set minimum values cost converge minimum lim min min moreover suppose addition satisfies trineq pythag satisfies consider sequence optimal discrete paths arg min linear interpolations subsequence subsequence linear paths converges uniformly limit path arg min discrete paths hausdorff sense unique reparametrization minimizer whole sequence discrete paths converges dhaus see figure example geodesic path noted introduction certain riemann sum let arg min observe optimality satisfies hence strongly approximates integral given partition length max reason case included theorem linear interpolating costs introduced lip hilb trineq pythag address case respect cost theorem suppose also satisfies lip hilb respect realizations probability set minimum values cost converge minimum lim min min moreover suppose addition satisfies trineq pythag satisfies consider sequence optimal paths arg min subsequence discrete paths subsequence linear paths converges uniformly limit path arg min discrete paths hausdorff sense approximating geodesics via random points unique reparametrization minimizer whole sequence discrete paths converges dhaus remarks make several comments assumptions related issues domain requirements closed connected needed quasinormal path results hold also proof proposition maximum distance nearest neighbor vertex requires domain boundary lipschitz true convex domains convexity domain also ensures linearly interpolated paths within domain allows comparison quasinormal ones definition constrained domain proof limsup inequality lemma addition bound domain allows equicontinuity criterion applied compactness property lemma ellipticity bound useful compare uniform distribution map result proposition well bounding number points certain sets lemma note approximating costs involve density estimators results depend specifics unlike density based distances discussed decay intuitively rate vanish quickly graph may disconnected respect postive set realizations however estimate ensures graph connected large almost proposition version condition related connectivity estimates continuum percolation moreover note prescribed rate yields fact vertex degree tending infinity grows long elliptic one calculates mean number points ball around order grows faster log assumptions somewhat standard assumptions treat parametric variational integrals include basic case assumption assumption useful show existence quasinormal paths proposition compactness minimizers case problematic sense discussed extra assumptions theorems main difficulty showing compactness optimal paths respect theorem form cost allows holder inequality argument deduce equicontinuity paths compactness follows using ascoliarzela theorem however coercivity yet additional assumptions one approximate geodesic locally straight lines several geometric estimates number points small windows around straight lines needed ensure accuracy approximation upperbound useful unique minimizers given results achieve strongest form arg min consists unique minimizing path perhaps reparametrization comment possibility suitable smoothness conditions erik davis sunder sethuraman integrand uniqueness criteria ordinary differential equations allow deduce equations unique geodesic points sufficiently close together proposition chapter hand general nonuniqueness may hold depending structure instance one may construct satisfying standing assumptions 
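A sketch of the strictly elliptic kernel family from the lemma: Phi(x, v) = <A(x) v, v>^(p/2) with the eigenvalues of A(x) confined to [lam, LAM] satisfies the standing growth bounds lam^(p/2) |v|^p <= Phi(x, v) <= LAM^(p/2) |v|^p.

```python
import numpy as np

def riemannian_kernel(A, p=2):
    """Return Phi(x, v) = <A(x) v, v>^(p/2) for a matrix field A (a sketch;
    A should be symmetric positive definite with bounded eigenvalues)."""
    def phi(x, v):
        return float(v @ A(x) @ v) ** (p / 2)
    return phi

# usage: an isotropic, spatially varying metric on R^2
phi = riemannian_kernel(lambda x: (1.0 + x[0] ** 2) * np.eye(2), p=2)
```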
several paths penalizing portions induce forks neighbor graphs clear approximation results say theorem hold respect graph graph formed attaching edges vertex nearest points instance arranged along fine regular grid optimal route moving origin staircase path length matter refined grid yet euclidean distance respect random geometric graph setting theorem allows enough choices among nearby points long elliptic optimal path approximate straight line would interest investigate extent results extend neighbor graphs proof theorems corollary mentioned introduction proof theorems relies probabilistic gamma convergence argument establishing basic notation results quasinormal minimizers present three main proof elements liminf inequality limsup inequality compactness following subsections proofs theorems corollary end subsection preliminaries define map point closest respect euclidean distance event tie adopt convention nearest neighbor smallest subscript since random note distortion ktn sup ktn sup min also random proposition appendix show almost surely graph connected large moreover shown exists constant almost surely lim sup ktn log throughout working realizations satisfied let probability event hold observe decay rate holds set realizations ktn rule certain degenerate configurations points let event distinct approximating geodesics via random points since come continuous distribution image lipschitz path lower dimension probability recall definitions quasinormal linear paths proposition let constants path satisfies sup proof lipschitz path also inf standard calculus variations argument see also proposition taking infimum obtain suppose quasinormal constant integrating noting gives hand hence follows finally establish suppose considering application triangle inequality gives however contradiction thus inequality holds liminf inequality first step getting control limit cost terms discrete costs following bound lemma liminf inequality consider suppose sequence paths lim sup sup lim inf proof sufficient condition inequality lower semicontinuity property hold jointly continuous convex see theorem subsequent remark discussion matter limsup inequality make effective use liminf inequality need identify sufficiently rich set sequences reverse inequality holds end develop certain approximations lipschitz paths piecewise linear piecewise quasinormal paths following result gives method recovering element suitable element proposition suppose constants satisfies erik davis sunder sethuraman let say define respect realizations probability set sufficiently large proof show sufficient verify consecutive vertices connected edge words first show note similarly segments incident endpoint max ktn either case assumption decay implies realizations sufficiently large show lipschitz lower bound triangle inequality argument distance bounded set satisfies therefore vanishes slower ktn positive large establish approximation properties obtained interpolating paths points proposition fix satisfying realization probability set let defined proposition obtain lim sup lim sup addition sup proof first argue let let piecewise linear interpolation lipschitz also construction max kid piecewise linear follows kid vanishes approximating geodesics via random points proposition follows hence supn likewise ktn realizations probability set since satisfies therefore vanishes slower ktn ktn hence considering bound follows sup max hence work place proceed main result subsection lemma limsup inequality let satisfy inequality respect 
realizations probability set may find sequence paths taken either form large lim sup remark sequence last lemma called recovery sequence since liminf inequality lemma limsup inequality lemma together imply limit limn proof let say constant greater define proposition consider paths case proposition interpolated paths converge uniformly consider bound proposition converges almost everywhere supn hence almost every also since application bounded convergence theorem yields desired recovery sequence consider case let proposition follows show sequence write time interval corresponds minimum cost geodesic path moving possibly expensive path erik davis sunder sethuraman case lim sup lim sup compactness subsection consider circumstances sequence paths context theorems limit point respect uniform convergence arguments differ particular consider paths uniformly bounded one follows bounded sobolev space sufficient derive suitable compactness result longer case however general outlook enough establish compactness result sequences optimal paths certain eccentric possibilities ruled begin considering compactness paths lie setting discussed afterwards proposition suppose respect realizations probability set large arg min proof path piecewise quasinormal path form try relate number segments path path energy recall formula let denote open euclidean ball radius around claim see suppose least points let denote points smallest largest index respectively minimality let denote third point points connected graph applying triangle inequality valid noting event thus path satisfies contradicts optimality therefore may thus cover vertices balls centered vertices balls contains two vertices follows subcover balls two containing common point lower bound found considering part contributed portion path portion terminate must visit center boundary hence euclidean length least summing portions obtain euclidean arclength approximating geodesics via random points follows get lipschitz bound recall bound lipschitz constant follows concatenation segments satisfies max obtain finishing proof prove compactness property lemma compactness property suppose large either arg min arg min realizations probability set supn consider following cases suppose paths arg min large suppose paths large supn case case respect realizations probability set relatively compact topology uniform convergence case conclusion holds respect realizations probability set proof first prove bound supn part choose holds lemma sequence either piecewise linear quasinormal paths lim hence minimality respect paths either sup sup argue claims cases cases bounded paths uniformly bounded invoke theorem must show equicontinuous family case lemma realizations cgn independent combining follows equicontinuous respect realizations implies case holds sequence case holds without loss generality focus attention case recall let conjugate erik davis sunder sethuraman combining assumption case supn constant independent hence equicontinuous proof theorems preceding gamma convergence ingredients place proofs theorem similar given together proofs theorems fix realization probability set let sequence paths large either arg min arg min supposing subsequential limit respect topology uniform convergence argue arg min liminf lemma lim inf let arg min inequality proposition exist constants hence limsup lemma exists sequence either piecewise linear quasinormal paths converging uniformly lim recall piecewise linear quasinormal respectively combining minimality lim inf lim sup min arg min 
case paths piecewise linear since min follows lim min lnk min similarly piecewise quasinormal min gnk min therefore shown subsequential limit exists optimal continuum path arg min consider theorem arg min part theorem arg min compactness lemma supn subsequential limit exists consider part theorem arg min suppose realization belongs also probability set subsequential limits follow compactness lemma approximating geodesics via random points consider subsequence work applied sequence subsequence nkj arg min uniformly settings theorems moreover min lnkj min paths piecewise linear min gnkj min paths piecewise quasinormal since argument valid subsequence recover min min min min respectively paths piecewise linear quasinormal finally unique minimizer considering subsequences whole sequence must converges uniformly proof corollary corollary statement hausdorff convergence order adapt results theorems end make following observation proposition fix realization consider sequence paths large either form suppose converges uniformly limit lim dhaus proof write since uniformly follows lim max inf hand consider case lipschitz constant case using linearity path since path one may bound diverges hence cases lim sup min combining follows dhaus proceed prove corollary proof corollary give argument case piecewise linear optimizers argument exactly piecewise quasinormal paths using theorem instead theorem suppose arg min sequence paths theorem respect probability set realizations subsequence subsequence lnk converges uniformly arg min proposition follows dhaus suppose unique reparametrization minimizer note invariant reparametrization conclude limit points correspond hence whole sequence converges dhaus erik davis sunder sethuraman proof theorems proofs theorems make use theorems comparing costs respect theorem arguments theorems involved especially respect minimal cost convergence several geometric estimates used show compactness principle begin following useful fact proposition suppose functions min min min min proof min taking gives min min inequality follows similarly proof theorem suppose lip implies since neighbors thus homogeneity bounds rescaling gives recall formulas summing gives following estimate relating applying bounded terms hence min suppose arg min proposition min arg min immediate consequence min min min min min theorem min min almost realizations proof shows min min lim sup min min particular min supn min hand min min min supn min observe lim inf min min hence min min finishes one part theorem approximating geodesics via random points address others consider piecewise linear interpolation min moreover noting optimality gives another application yields hence bounded min min hence min min min lim lim also observe consequence supn given compactness lemma respect realizations probability set subsequence uniformly convergent subsequence limit liminf lemma lim inf finally follows min arg min consequently unique minimizer whole sequence converges uniformly almost surely proofs statements hausdorff convergence follow arguments given corollary omitted proof theorems prove theorems two parts proof theorems first prove proposition minimal costs converge min making use comparisions quasinormal paths control theorem second proposition subsection show minimizing paths converge various senses desired main tool proof compactness property proposition minimal shown subsection supply proofs desired propositions obtain useful estimate cost quasinormal path linear one proposition suppose also satisfies lip hilb 
constant particular quasinormal path connecting erik davis sunder sethuraman proof lipschitz path optimizing recover arclength satisfies particular path constrained euclidean ball around radius note also minimizing euclidean path constant speed straight line times also constrained ball lipschitz path constrained ball expand paths lip respect lipschitz constant therefore respect lipschitz paths constrained note condition hilb cost respect straight lines geodesics particular optimal hence minimal respect moving given invariance parametrization proposition applied two functionals sides obtain max noting arclength bounds proposition suppose also satisfies lip hilb respect realizations probability set minimum values converge minimum lim min lim min min proof consider energies piecewise quasinormal path vertices noting quasinormal path approximating geodesics via random points application proposition noting gives summing gives min last inequality follows applying recall energy similarly directly using lip linear path vertices lvi linear path slope summing using obtain min reprise argument theorem consequence proposition min min min min hence supn min supn min theorem seen proof realizations probablility set min min finite conclude also min min repeat argument place using min min conclude also min converges min compactness property analogous lemma formulate compactness property minimal paths arg min arg min useful consider partition regular grid let let intersection box refer sets boxes understanding boundary results irregularly shaped regardless diameter points connected random geometric graph proposition consider assumptions second parts theorems suppose arg min consider piecewise linear interpolations respect realizations probability subset sequence relatively compact topology uniform convergence suppose arg min conclusion holds optimal linear interpolations erik davis sunder sethuraman proof show sequence equicontinuous almost realizations paths belong bounded set proposition would follow criterion partition boxes lemma number boxes visited bounded large lemma number vertices box bounded constant large thus maximum number points bounded since obtain supi large supi implies piecewise linear paths lipschitz respect fixed constant large particular equicontinuous indeed say wkn consider part path connecting times namely argument holds paths show lemmas used proof proposition first bound number boxes visited optimal path lemma suppose also satisfies lip hilb suppose arg min arg min optimal paths realizations probability set large number distinct boxes visited bounded proof visit path distinct boxes euclidean length least since boxes adjacent recalling formulas visitation therefore cost cost least may bound number boxes visited depends dimension path recalling proposition respect realizations lim min lim min min lemma follows say min next result shows optimal paths arg min arg min long necks gives bound number points nearby edge graph lemma suppose fix realization probability set suppose arg min optimal path let supi suppose respect realizations probability subset large approximating geodesics via random points suppose arg min conclusions hold place proof first show one points euclidean distance away recalling implies path connecting one step would less costly respect since taken minimal points therefore must belong suppose arg min recall form similarly one points away lvk lvi also contradiction minimality consider proof count bound respect argument respect exactly place first bounded number points 
distinct ball binomial recalling bounded vol constant let max union bound gives exp log log log taking yields noting exp log log log form right hand side summable hence lemma realizations intersection probability set say maxi large follows give lower bound cost certain long necks cost optimal moving away two close vertices lemma suppose also satisfies rineq ythag fix realization probability set suppose arg min optimal path let indices erik davis sunder sethuraman let denote straight line segment consider neighborhood point constant suppose arg min holds place proof argument present suppose point least euclidean distance trineq condition lemma strictly less large also conclude large addition thus pythag obtain hence follows combining inequalities bound number points optimal path box main estimate used proof proposition argument two steps first step using rough count number vertices path within given box may approximate contribution vertices box terms localized cost use pythag applied localized cost deduce optimal path box trapped small set box second step show small sets contain constant number points lemma consider assumptions second parts theorems suppose arg min optimal path respect realizations probability subset large constant suppose arg min statement holds place proof give main argument indicate modifications respect consider box boxes one point trivially satisfy claim lemma say suppose least two points box step let first last points box smallest largest indices respectively lemma hence lip also lemma hence following estimate respect localized energy fixed obtained approximating geodesics via random points similarly considered following reasoning using lemma lip may obtain place moreover lvk hence combining two estimates obtain lvk returning path lemma noting path exiting line segment costlier respect straight path connecting single hop follows let consider since lemma using lip lvi following reasoning given respect may obtain place noting path exiting nneighborhood line segment cost one step cost amount cost differs one step cost lvi moving therefore cost savings moving one step considering exit bounded positive large case fix since hence choice exiting paths optimal points box must belong line segment connecting ith jth points note given value use lemma exponent satisfy afforded assumptions step focus following counting argument respect count points small set cardinality bounded binomial count number points distinct set note nearly cylinder length radius bounded hence union events bound number boxes intersecting bounded erik davis sunder sethuraman suppose form display summable particular part assumptions large fixed chosen holds hence lemma intersection probability set say recover claim large path visits points first last visit visited box convergence optimal paths consider behavior optimal paths analogy theorem energy proposition consider assumptions second parts theorems consider discrete path arg min linear interpolation respect realizations probability subset subsequence correspondingly subsequence linear paths converges uniformly limit path arg min discrete paths hausdorff sense unique reparametrization minimizer whole sequence converges dhaus consider path arg min conclusions holds place proof consider first arg min compactness criterion proposition almost surely subsequence paths subsequence converging uniformly limit liminf lemma lim inf argument conclusion holds arg min place show arg min respect optimal paths min min min proposition obtain min desired conclusion optimal 
paths recall argument proof theorem using standing assumptions allowing lip derived namely min constant lip consequence proposition saw min min since proposition min min conclude arg min finally remark hausdorff convergences argued proof corollary appendix collect results previously assumed rate proposition let samples probability measure lipschitz domain let sup min approximating geodesics via random points suppose uniformly bounded positive constant exists constant independent almost realizations lim sup log particular satisfies almost surely large path connecting via graph proof first address claim respect let euclidean ball radius centered since lipschitz constant small denotes lebesgue measure discussion cone conditions section follows constant crd small since density bounded positive constant exists constant crd sufficiently small constant therefore recalling crd let collection points may take number points satisfy constant independent say choosing regular grid grid length let denote event consider event exists triangle inequality argument hence together let log gives summable term log lemma log large show connected satisfies let vertices consider line convexity path contained consider points path krn krn point jrn within euclidean distance point large construction euclidean distance less sum distances jrn jrn bounded similarly endpoints within euclidean distance respectively since large path along vertices belongs connected large erik davis sunder sethuraman existence quasinormal minimizers discuss conservation law paths existence lipschitz paths following treatment proposition consider integral functional satisfies attains minimum set lipschitz paths words exists inf case exists arg min constants constants hold arg min proof first give argument case note assumption integrand continuous convex second argument satisfies may extended continuously function domain extended sobolev paths existence minimizer follows remark section particular continuity convexity assumptions imply respect weak convergence sobolev functions existence minimizer follows standard compactness argument let denote sobolev minimizer consider inner variation diffeomorphism interval may shown see discussion pages proposition remark section class inner variations optimality condition together euler identity homogenous functions together imply constant finally assumption follows exist constants almost every particular proposition proved argument case complicated lack compactness respect weak convergence sobolev space well difficulty establishing analogue involved argument relating optimizers optimizers quadratic functional existence lipschitz path arg min satisfying established theorem see also theorem gives alternative argument assumption inequality follows acknowledgement work partially supported aro approximating geodesics via random points references adams fournier sobolev spaces academic press agronowich sobolev spaces generalizations elliptic problems smooth lipschitz domains springer monographs mathematics alamgir von luxburg shortest path distance random neighbor graphs proc int conf machine learning auffinger damron hanson years first passage percolation beardwood halton hammersley shortest path many points math proc cambridge philosophical soc bern emerging challenges computational topology nsf report arxiv bernstein silva langford tenenbaum graph approximations geodesics embedded manifolds technical report department psychology stanford university buttazzo giaquinta hildebrandt variational problems 
introduction oxford university press oxford burago burago ivanov course metric geometry american mathematical society providence cabello jejcic shortest paths intersection graphs unit disks crane weischedel wardetzky geodesics heat new approach computing distance based heat flow acm trans graph giesen wagner shape dimension intrinsic metric samples manifolds discrete computational geometry von deylen glickenstein wardetzky distortion estimates barycentric coordinates riemannian simplices gelfand smirnov lagrangians satisying crofton formulas radon transforms nonlocal differentials adv math hashimoto sun jaakkola metric recovery directed unweighted graphs proc int conference stat aistats san diego jmlr hildebrandt minimizers parametric variational integrals petersburg math hirsch gloaguen schmidt first passage percolation random geometric graphs application trees adv appl probab howard newman euclidean models percolation probab theory relat fields howard newman geodesics spanning trees euclidean percolation ann probab hwang damelin hero shortest path random points ann appl probab lagatta wehr shape theorem riemannian percolation math phys lagatta wehr geodesics random riemannian metrics commun math phys maz sobolev spaces application elliptic partial differential equations springer grundlehren der mathematischen wissenschaften sapiro distance functions geodesics submanifolds point clouds siam journal applied mathematics alvarez paiva problems finsler geometry handbook differential geometry vol dillen verstraelen elsevier chapter papadopoulos hilbert fourth problem handbook hilbert geometry papadopoulos troyanov european mathematical society penrose random geometric graphs oxford university press oxford erik davis sunder sethuraman keriven cohen others geodesic methods computer vision graphics foundations trends computer graphics vision sajama orlitzky estimating computing density based distance metrics proc int cont machine learning bonn germany sethian fast marching methods siam review tamassy relation metric spaces finsler spaces differential geometry applications tenenbaum silva langford global geometric framework nonlinear dimensionality reduction science zhang jiao geodesics point clouds mathematical problems engineering department mathematics university arizona tucson edavis department mathematics university arizona tucson sethuram
10
analysis sparseva estimate finite sample data case huong james welsh cristian rojas wahlberg mar school department electrical engineering computer science university newcastle australia automatic control access school electrical engineering kth royal institute technology stockholm sweden abstract paper develop upper bound sparseva sparse estimation based validation criterion estimation error general scheme cost function strongly convex regularized norm decomposable pair subspaces show general bound applied sparse regression problem obtain upper bound traditional sparseva problem numerical results used illustrate effectiveness suggested bound key words sparseva estimate upper bound finite sample data introduction regularization well known technique estimating model parameters measured data applications fields related constructing mathematical models observed data system identification machine learning econometrics idea regularization technique solve convex optimization problem constructed cost function weighted regularizer regularized various types regularizers suggested far nuclear norms last decades system identification community regularization utilised extensively work focused analysing asymptotic properties estimator length data goes infinity purpose type analysis evaluate performance estimation method determine estimate acceptable however practice data sample size estimation problem always finite hence difficult judge performance estimated parameters based asymptotic properties especially data length short recently number authors published research material paper presented conference email addresses huong james welsh cristian rojas wahlberg preprint submitted automatica aimed analysing estimation error properties regularized sample size data finite specifically develop upper bounds estimation error high dimensional problems number parameters comparable larger sample size data activities statistics machine learning communities among works paper provides elegant interesting framework establishing consistency convergence rates estimates obtained regularized procedure high dimensional scaling determines general upper bound regularized shows used derive bounds specific scenarios however framework applicable case number parameters comparable larger sample size data whereas typical system identification problem number parameters generally smaller sample size data paper utilize framework suggested develop upper bound estimation error used system identification problem problems implemented using sparseva sparse estimation based validation criterion framework aim derive upper bound estimation error general sparseva estimate apply bound sparse linear regression problem obtain upper bound traditional sparseva problem addition also provide numerical simulation results illustrate suggested bound sparseva estimation error april paper organized follows section formulates problem section provides definitions properties required later analysis general bound sparseva estimation error developed section section apply general bound special case model cast linear regression framework section illustrates developed bound numerical simulation finally section provides conclusions based chosen validation criterion example suggested chosen akaike information criterion aic nlog bayesian information criterion bic suggested prediction error criterion traditional regularization method described recently developed upper bound estimation error estimate unknown parameter vector bound function constants related nature 
data regularization parameter cost function data length beauty bound quantifies relationship estimation error finite data length relationship easy confirm properties estimate asymptotic scenario developed literature time ago problem formulation let denote identically distributed observations marginal distribution denotes convex differentiable cost function let minimizer population risk inspired goal derive similar bound sparseva estimate want know much sparseva estimate differs true parameter data sample size finite note notation techniques used paper similar however convex optimization problem posed traditional regularization framework paper optimization problem based sparseva regularization task estimate unknown parameter data well known approach problem use regularization technique solve following convex optimization problem arg min regularization parameter norm difficulty estimating parameter using regularization technique one needs find regularization parameter traditional method choose use cross validation estimate parameter different values select value provides best fit validation data cross validation method quite time consuming dependent data specifically interested sparseva sparse estimation based validation criterion framework suggested provides automatic tuning regularization parameters utilizing sparseva framework estimate computed using following convex optimization problem section provide descriptions definitions properties norm cost function needed establish upper bound estimation error note provide brief summary research described paper understood readers find detailed discussion norm said decomposable respect regularization parameter estimate obtained minimizing cost function arg min decomposability norm let consider pair linear subspaces orthogonal complement space defined inner product maps arg min definitions properties norm cost function many combinations norms vector spaces satisfy property example norm advantage sparseva framework natural choices regularization parameter pair defined sparse vector space defined subset cardinality define model subspace shown decomposable respect pair arg min sup sup sequel simplify notation write denote twice differentiable function strongly convex exists hessian satisfies theorem assume norm decomposable respect subspace pair cost function differentiable strongly convex curvature consider sparseva problem following properties hold equivalent statement minimum eigenvalue smaller interesting consequence strong convexity property analysis regularization technique using sparseva section apply properties described section derive upper bound error sparseva estimate unknown parameter upper bound described following theorem strong convexity sup supremum operator based definition one easily see dual norm respect euclidean inner product norm exists lagrange multiplier solution optimal solution sparseva problem satisfies following inequalities chosen inequality geometric interpretation graph function positive curvature term largest satisfying typically known curvature projection operator given inner product dual norm defined kuk projection vector onto space respect euclidean norm defined following dual norm quantity measures well norm compatible error norm subspace shown regularized norm norm error norm norm subspace compatibility constant notice also finite due equivalence finite dimensional norms define orthogonal complement respect euclidean inner product computed follows sup chosen subspace compatibility constant given norm error norm 
subspace compatibility constant subspace respect userdefined regularization parameter chosen either log suggested suggested remark note sparse regression problem common system identification often used obtain low order linear model regularization proof see appendix section consider convex optimization problem hessian matrix cost function computed remark note theorem intended provide upper bound estimation error general sparseva problem stage hard evaluate quantify value right hand side inequalities still contain term abstract terms however later sections paper general upper bound provide bounds estimation errors specific scenarios one bound estimation error hence usual sense apply theorem order prove strongly convex need prove see requirement coincides requirement persistent excitation input signal system identification problem experiment welldesigned input signal needs persistently exciting order matrix positive definite matrix means condition always satisfied linear regression problem derived well posed system identification problem hence cost function optimization problem generally strongly convex means choice regression matrix satisfies persistent excitation condition exists positive curvature cost function remark bound theorem actually family bounds choice pair subspaces specific scenario goal choose obtain optimal rate bound analysis strong convexity property curvature norm cost function upper bound sparse regression section illustrate apply theorem derive upper bound error sparseva estimate true parameter linear regression model unknown parameter required estimated disturbance noise regression matrix output vector need consider whether exists global curvature regression matrix consider random matrix column independently sampled normal distribution zero mean covariance matrix ensemble question need address whether exists constant value depends gives using sparseva framework chosen norm cost function chosen consider following linear regression model kyn random matrix sampled ensemble notice case chosen times smallest eigenvalue answer question investigate distribution smallest eigenvalue estimate found solving following problem distribution smallest eigenvalue discussed frequently literature especially principal component analysis pca community arg min well known matrix ensemble exists distribution smallest eigenvalue covariance matrix whose parameters depend eigenvalues closed form distribution found paper use following notation exp denotes probability density function pdf normal distribution denotes value chi square distributed degrees freedom existence distribution smallest eigenvalue denote means given probability exists value wmin wmin random ensemble matrix expressed wmin paper denote lower bound global curvature probability developing upper bound present three propositions assist development upper bound estimation error optimization problem related computation universal constant corresponding distribution note reality difficult compute exact distribution using formula suggest use empirical method compute distribution due inexact knowledge formula distribution idea generate large number random matrices compute smallest eigenvalue build histogram values approximation one compute value wmin ensure inequality wmin occurs probability finally compute using formula wmin notation proposition consider optimization problem denote corresponding lagrange multiplier constraint computed proof see appendix section assumptions proposition suppose assumptions hold probability linear regression 
following assumptions made assumption column regressor matrix mutually independent distributed constant symmetric positive definite matrix exp smax smax maximum element diagonal matrix particular assumption noise vector gaussian entries assumption true parameter weakly sparse smax probability least proof see appendix section constant proposition suppose assumptions hold probability least assumption note convention set corresponds exact sparsity set elements belonging set entries generally set forces ordered absolute values decay certain rate exp smax using definition subspace compatibility constant described section smax maximum element diagonal matrix particular smax denotes cardinality probability least theorem generate upper bound problem need establish upper bound based definition subspace proof see appendix section three propositions state following theorem gives upper bound estimation error case weakly sparse estimates theorem suppose assumptions hold large probability following inequality denotes vector formed smallest magnitude entries define lower bound global curvature regression matrix half smallest eigenvalue probability substituting bound theorem integer following bounds max chosen smax smax smax probability least smax smax smax lower bound curvature regression matrix half smallest eigenvalue probability integer vector formed smallest magnitude entries smax maximum element diagonal matrix chosen probability least smax smax proof integer define set indices largest magnitude entries complementary set corresponding subspaces therefore integer probability least max smax smax section numerical examples presented illustrate bound stated theorem section consider case input gaussian white noise whilst section input correlated signal zero mean smax smax remark bound theorem also family bounds one value fir model structure used order construct sparseva problem number parameters fir model set regularization parameter chosen remark note developed bound theorem depends true parameter unknown constant using similar proof proposition derive assumption upper bound term specifically compute upper bound using different values probability parameters chosen respectively setting probability upper bound correct upper bound compared note plot upper bound true estimation errors logarithmic scale gaussian white noise input section random discrete time system random model order generated using command drss matlab system poles magnitude less gaussian white noise added system output give different levels snr noise level different input excitation signals gaussian white noise variance output noise realizations generated set input output data system parameters estimated using different sample size numerical evaluation plots estimation error versus data length different noise levels displayed figures figures red lines true estimation errors estimates using sparseva framework magenta blue cyan lines upper bounds developed theorem correspond respectively see plots confirm bound developed theorem noise levels becomes large estimation error corresponding upper bound become smaller goes infinity estimation error tend note bounds slightly different chosen values however significantly seen bounds relatively insensitive choice hence always place upper bound term known constant depends nature true parameter addition plot another graph shown fig compare proposed upper bound true estimation ern rors corresponding different value pec log aic bic blue lines upn per bounds developed theorem correspond therefore theorem 
see estimation error confirms result asymptotic case sparseva estimate converges true parameter developed bounds true estimation errors bic true estimation errors aic true estimation errors pec true estimation errors log log fig plot estimation error gaussian white input signal fig plot proposed bound true estimation errors corresponding different choices white gaussian input signal magenta bic green aic red pec choices log true estimation errors section random discrete time system random model order generated using command drss matlab system poles magnitude less white noise added system output different levels snr noise level different input excitation signals output noise realizations generated set input output data system parameters estimated using different sample sizes coloured noise input fig plot estimation error gaussian white input signal input signal generated filtering zero mean gaussian white noise unit variance filter true estimation errors log due filtering covariance matrix regression matrix distribution diagonal form addition ensure columns regression matrix mutually independent constructing regression matrix take output every nts seconds sampling time allows evaluate tightness upper bound theorem note completely different scenario section fig plot estimation error gaussian white input signal fir model structure used order construct linear regression sparseva problem number parameters fir model set regularization parameter chosen three different values magenta bic green aic red pec lines true estimation errors estimates value using sparseva framework see plot confirms validity proposed upper bound compute upper bound using different values probability parameters chosen respectively setting probability upper bound correct upper bound compared note plot upper bound true estimation errors logarithmic scale log fig plot estimation error coloured input signal conclusion paper provides upper bound sparseva estimation error general case choice strongly convex cost function decomposable norm also evaluate bound specific scenario sparse regression estimate problem numerical results confirm validity developed bound different input signals different output noise levels different choices regularization parameters log true estimation errors plots upper bound stated theorem true estimation error displayed figures figures red lines true estimation errors estimates using sparseva framework magenta blue cyan lines upper bounds developed theorem correspond respectively see plots confirmed bound developed theorem noise levels becomes large estimation error corresponding upper bound become smaller goes infinity estimation error tend note bounds slightly different chosen values however significantly seen bounds relatively insensitive choice true estimation errors references fig plot estimation error coloured input signal log true estimation errors fig plot estimation error coloured input signal bickel ritov tsybakov simultaneous analysis lasso dantzig selector annals statistics borwein zhu variational approach lagrange multipliers journal optimization theory applications boyd vandenberghe convex optimization cambridge university press candes tao dantzig selector statistical estimation much larger annals statistics fazel hindi boyd rank minimization heuristic application minimum order system approximation proceedings american control conference fazel hindi boyd heuristic matrix rank minimization applications hankel euclidean distance matrices proceedings american control conference foucart 
rauhut mathematical introduction compressive sensing basel welsh blomberg rojas wahlberg reweighted nuclear norm regularization sparseva approach proceedings ifac symposium system identification proof see supplementary material huang horowitz asymptotic properties bridge estimators sparse regression models annals statistics james distributions matrix variates latent roots derived normal samples annals mathematical statistics knight asymptotics estimators annals statistics negahban ravikumar wainwright unified framework analysis decomposable regularizers statistical science osborne presnell turlach lasso dual journal computational graphical statistics pillonetto dinuzzo chen nicolao ljung kernel methods system identification machine learning function estimation survey automatica rojas hjalmarsson sparse estimation based validation criterion ieee conference decision control european control conference pages rojas hjalmarsson sparse estimation polynomial rational dynamical models ieee transactions automatic control stoica system identification prentice hall tibshirani regression shrinkage selection via lasso journal royal statistical society series tikhonov arsenin solutions problems winston sons vershynin introduction analysis random matrices eldar kutyniok editors compressed sensing theory applications cambridge university press quote following lemma modification fit notation used sparseva problem lemma helps find important properties related sparseva estimate based properties next section derive upper bound estimation error note notational simplicity denote lemma consider convex optimization problem pair property solution problem lagrange multiplier following hold function attains minimum proof see proof theorem third condition cited theorem complementary slackness condition reduces condition lemma background knowledge confirmed existence lagrange multiplier consider function defined follows first cite lemma directly enable proof theorem constructed lemma norm decomposable respect vectors recall proof theorem first need prove exists lagrange multiplier sparseva problem assume without loss generality since otherwise take according lagrange multiplier convex optimization problem constraint exists slater condition satisfied specifically sparseva problem lagrange multiplier exists exists always exists parameter vector take therefore exists lagrange multiplier sparseva problem appendix using strong convexity condition applying inequality dual norm euclidean projection onto see section similarly terms definition subspace compatibility next combining inequality triangle inequality since therefore combining lemma therefore substituting gives note quadratic polynomial exists makes must satisfy notice estimate sparseva problem property lemma states function attains minimum means since hence applying inequality taking defining combining case using similar analysis case consider two cases case substituting obtain using inequality yields kyn kyn applying inequality first term gives subdifferential computed form note proof similar one expression derived lagrange multiplier traditional norm regularization problem lasso derived lagrange multiplier sparseva problem given proof proposition let rewrite sparseva problem arg min since also write form note means therefore combining also using property lemma solution sparseva problem lagrange multiplier note therefore proof proposition linear regression choice lagrangian form using lemma lagrangian optimization problem denote jth row matrix computed expression 
parentheses right hand side monotonically decreasing rtj using gives smax probability least proof proposition solution problem denote jth row matrix becomes assumption using argument proposition element distributed consider variable rtj since element expression bounded using standard result exp obtain exp rtj smax note rtj denotes pdf normal distribution rtj gives smax gives probability probability least equivalently hence variance distribution variable order derive bound first derive upper bound variance rtj distribution since element rtj hence probability exp smax particular taking consider variable using assumption disturbance noise property related sum independent normal random variables rtj holds probability least smax denotes maximum diagonal element matrix note contains jth element column matrix following assumption element rtj bound follows events holds necessarily independent joint probability bounded like acn probability least equivalently note section probability combining inequality gives probability hence probability means exp smax probability least following reasoning proof proposition therefore exp smax probability least taking smax gives smax smax probability least gives
10
digital predistortion spurious emission suppression noncontiguous spectrum access aug mahmoud abdelaziz student member ieee lauri anttila member ieee chance tarver student member ieee kaipeng student member ieee joseph cavallaro fellow ieee mikko valkama senior member ieee transmission schemes combined high requirements pose big challenges radio transmitter power amplifier design implementation due nonlinear nature severe unwanted emissions occur potentially interfere neighboring channel signals even desensitize receiver frequency division duplexing fdd transceivers article suppress unwanted emissions dpd solution specifically tailored spectrally noncontiguous transmission schemes devices proposed proposed technique aims mitigating selected spurious intermodulation distortion components output hence allowing substantially reduced processing complexity compared classical linearization solutions furthermore novel decorrelation based parameter learning solutions also proposed formulated offer reduced computing complexity parameter estimation well ability track timevarying features adaptively comprehensive simulation measurement results provided using commercial lteadvanced mobile evaluate validate effectiveness proposed solution real world scenarios obtained results demonstrate highly efficient spurious component suppression obtained using proposed solutions index filters carrier aggregation digital predistortion frequency division duplexing nonlinear distortion power amplifier software defined radio spectrally agile radio spurious emission ntroduction spectrum scarcity data rate requirements two main motivating factors behind introducing carrier aggregation modern wireless communication systems transmission multiple component carriers ccs different mahmoud abdelaziz lauri anttila mikko valkama department electronics communications engineering tampere university technology tampere finland chance tarver kaipeng joseph cavallaro department electrical computer engineering rice university houston work supported finnish funding agency technology innovation tekes project future networks using reconfigurable antennas funera linz center mechatronics lcm framework austrian programme work also funded academy finland projects massive mimo advanced antennas systems signal processing mmwaves fundamentals ultra dense networks application machine type communication competitive funding strengthen university research profiles work also supported part national science foundation grants frequencies adopted simultaneously either within band intraband different bands interband interband naturally leads transmit spectrum also happen intraband case adopted component carriers located manner adopting provides lots flexibility spectrum use also poses substantial challenges transmitter design especially devices user equipment moreover multicarrier type modulations large power ratio papr used wireless communication systems combined noncontiguous transmission schemes high requirements controlling transmitter unwanted emissions due power amplifier nonlinearity becomes true challenge comparison different transmitter architectures based transmission schemes presented showing power efficient combine ccs thus use single instead separate also technically feasible especially intraband case power savings quite significant since overall transmitter efficiency dominated particularly critical mobile terminals devices basestations limited computing cooling capabilities however power efficiency gained using single ccs comes expense 
severe unwanted emissions stemming nonlinear characteristics occur adjacent channels bands even receiver band fdd transceiver case levels different unwanted emissions general regulated standardization bodies recently demonstrated context mobile transmitter noncontiguous multicluster type transmission nonlinearities lead spurious emissions seriously violate given spectrum spurious emission limits properly controlled furthermore fdd devices addition violating general spurious emission limits generated spurious components also overlap device receive band causing receiver desensitization one obvious solution decrease levels unwanted emissions back transmit power saturation region terminology known maximum power reduction mpr mpr values allowed use cases mobile terminals however approach end yielding significantly lower efficiency well substantial reduction network coverage thus clear need alternative linearization solutions drastic adverse effects digital predistortion dpd general one effective solutions mitigating transmitter nonlinear distortion substantial research efforts past years developing ever efficient elaborate dpd techniques mostly macro type devices conventional dpd approaches seek linearize full composite transmit signal thus refer solutions dpd article also handful recent works efficient concurrent linearization techniques transmitters employ single works assume component carriers separated large distance spurious emissions filtered transmit filter hence linearization main carriers pursued complementary methods scope paper introduce dpd solution suppressing spurious emissions transmission cases concentrating specifically linearization main component carriers hereafter refer linearization solutions dpd approach motivated following two factors first emission limits spurious region generally stricter spectral regrowth region around component carriers thus easily violated recognized also recently context intraband noncontiguous carrier aggregation moreover even interband carrier aggregation scenarios fdd devices spurious components hitting band causing receiver desensitization second concentrating linearization efforts critical spurious emissions processing instrumentation requirements significantly relaxed thus potentially facilitating dpd processing also mobile terminals devices applies dpd main path processing complexity also feedback receiver instrumentation complexity also substantially reduced recent studies literature consider mitigation spurious emissions explicitly processing added complement concurrent linearization system assuming response within spurious dpd parameter estimation done offline based extracting parameters using network analyzer lsna measurements covering processing memoryless fit observed intermodulation distortion imd considered certain basis functions performed basis functions generated using wideband composite carrier baseband equivalent signal followed filtering implying high sample rate processing ity especially widely spaced carriers estimated regenerated imd applied input oppositely phased canceled output hand nonlinearities specifically targeted explicit lowrate behavioral modeling baseband equivalent emissions furthermore parameter estimation dpd based feedback learning rule shown better linearization performance compared inverse solution terms spurious emission suppression moreover fpga implementation thirdorder dpd presented demonstrating fast reliable performance real time constraints recent overview article highlighted main principles 
advantages low complexity subband dpd solutions concentrating details dpd processing parameter estimation adaptation algorithms technical level hand flexible dpd solution also proposed authors optimize dpd coefficients minimize nonlinear distortion particular frequency spurious regions however like fullband dpd techniques requires high sampling rates transmitter feedback receiver carrier spacing ccs increases article extend elementary dpd solution proposed authors two ways first dpd extended incorporate processing based explicit modeling spurious components enhance spurious emission suppression considerably furthermore also extend dpd solution include thus offering flexibility linearization capabilities beyond basic proposed solutions derived wideband nonlinear pas memory furthermore also formulate novel parameter estimation methods covering learning rules efficiently identify needed dpd parameters low complexity proposed learning solutions also shown offer better performance earlier proposed inverse based methods also provide comprehensive simulation measurement results using commercial mobile evaluate validate effectiveness proposed solutions real world scenarios rest article organized follows section presents mathematical modeling different considered spurious components different produced nonlinear memory stemming modeling also corresponding core processing principles proposed dpd solutions different formulated section iii presents proposed dpd parameter learning solutions covering decorrelation solutions different subbands section addresses different implementation alternatives proposed dpd concept also fig illustration different intermodulation distortion components created nonlinear excited signal two component carriers nonlinear distortion components order shown analyzes computing complexity dpd terms number floating point operations together hardware complexity aspects finally sections report comprehensive simulation measurement results evidencing excellent spurious emission suppression different realistic scenarios purious omponent odeling roposed band dpd rocessing manuscript assume practical case noncontiguous carrier aggregation two component carriers noncontiguous signal applied input nonlinearity leads intermodulation distortion different shown fig assuming separation addition spectral regrowth around main carriers intermodulation two ccs yields strong imd integer multiples main ccs article refer intermodulation located main ccs similarly located main ccs general includes nonlinear distortion components different orders shown fig example case nonlinearity contains third fifth nonlinearities contains fifth components section start fundamental modeling nonlinear distortion concrete example stemming modeling formulate basic processing proposed dpd modeling extended cover nonlinear distortion components followed corresponding dpd processing actual parameter estimation learning algorithms proposed dpd structures presented details section iii modeling developments adopt wideband parallel hammerstein model describe fundamental nonlinear behavior shown model accurately measured nonlinear behavior different classes true pas spurious component modeling modeling carried composite baseband equivalent level two component carriers assumed separated composite baseband equivalent input output signals order parallel hammerstein model monomial nonlinearities fir branch filters respectively read fif fif odd baseband component carrier signals denotes branch filter impulse responses order 
convolution operator intermodulation two component carriers leads appearance imd components shown fig corresponding spectrum concrete example let analyze imd direct substitution allows extracting baseband equivalent distortion terms manipulations yields yim odd denote baseband equivalent impulse responses corresponding wideband model filters evaluated formally defined fif denoting ideal low pass filtering operation passband width times bandwidth wider furthermore corresponding pth order static nonlinear snl basis functions related nonlinear distortion subbands respectively assuming order model concrete example basis functions read frf odd fig proposed dpd processing principle focusing thick lines used indicate complex processing presentation simplicity filtering feedback coupler antenna shown variables indicate continuation use dpd processing corresponding predistorted signals dpd processing principle illustrated fig conceptual level duplexer filters omitted simplicity presentation since directly impact dpd processing learning similar convention followed also figures different implementation alternatives well parameter learning feedback receiver aspects addressed details sections iii generalization corresponding basis functions obtained simply interchanging expressions next behavioral modeling results utilized formulate proposed dpd concept specifically tailored suppress distortion direct substitution leads appearance spurious intermodulation terms also subbands addition previously considered illustrated already fig emissions also harmful since violate emission limits cause receiver desensitization thus developing dpd solution tackle also distortion feasible complexity important similar developments next extract imd terms higherorder baseband equivalent imd terms concrete examples extracted using interpreted proper yielding yim odd yim odd proposed dpd principle presentation purposes focus suppressing spurious emissions corresponding emissions also easily mitigated using similar structure minor changes elaborated later section key idea dpd concept inject proper additional lowpower cancellation signal structural similarity located input level term output reduced stemming imd structure appropriate digital injection signal obtained adopting basis functions combined proper filtering using bank dpd filters incorporating subband dpd processing polynomial order composite baseband equivalent input signal reads yim odd yim odd models denote baseband equivalent impulse responses corresponding wideband model filters evaluated respectively formally obtained similar replacing either respectively pth order basis functions denoted assuming order model read corresponding baseband equivalent imd terms negative obtained simply interchanging stemming distortion structure adopting similar ideology previous subband dpd case natural injection signal suppressing spur components obtained properly filtering combining basis functions specific filters pth order basis functions denoted incorporating dpd processing dpd polynomial order aggregating time parallel dpds simultaneously concrete example composite baseband equivalent input signal reads odd subsection derive three analytical reference solutions calculating dpd coefficients considered approaches inverse solution also adopted power minimization solution analytical solution keep analytical developments simple tractable consider simplified case memoryless dpd memoryless actual decorrelation based learning solutions devised later section formulated general cases 
processing memory starting dual carrier signal limiting study simplified case memoryless dpd memoryless basic signal models given yim fif odd analytical reference solutions analytical reference solutions taking subband dpd simple tractable example order demonstrate minimizing power essentially identical decorrelating observation corresponding dpd basis functions decorrelation based learning rules devised covering general cases higherorder processing memory odd general achievable suppression distortion different depends directly selection optimization different filters addressed detail next section memoryless model parameters denotes memoryless dpd parameter optimized yim refer baseband equivalent output positive without dpd respectively clearly shows distortion output dpd adopted depends directly thus controlled dpd coefficient wellknown inverse solution dpd parameter selected term canceled corresponding solution denoted thus reads iii band dpd parameter earning sing ecorrelation rinciple section based previous spurious component modeling proposed dpd principle formulate computing feasible highly efficient estimation algorithms learning optimizing dpd filter coefficients spurious emissions minimized considered start deriving however remove distortion terms created due predistortion shown elaborate method selecting dpd parameter one minimizes power total signal referred minimum power minimum error mmse solution continuation notational convenience define error signal ideal predistortion signal would zero thus optimization means minimizing power error signal detailed derivation optimum dpd parameter minimizes mean squared error given appendix yielding learning subsection provide actual learning rules dpd structures memory whose basic operating principles described section facilitate learning assume feedback observation receiver measuring particular whose dpd filter coefficients currently learning notice opposed classical band dpd principles wideband observation receiver typically needed narrowband receiver sufficient since particular observed dpds based ieij refers products signals multiple strongly correlated basis functions given order moments form respectively start introducing basis notes statistical expectation operator function orthogonalization procedure actual shown concrete examples section proposed adaptive decorrelation algorithms described mmse solution provides better linearization performance basis function orthogonalization nonlinearcompared inverse solution however ity order higher order considered shown analytical mmse solution requires multiple basis functions adopted dpd knowledge various moments signals processing described section taking example furthermore solution valid case case snl basis functions memoryless nonlinear system beyond given highly correlated obtaining analytical expression dpd coefficients negatively impact convergence stability becomes overly tedious thus relax constraints adaptive learning therefore snl alternative solution based minimizing correlation basis functions first orthogonalized yields tween observation basis new set dpd basis functions function initially discussed written formally sample level extended higher nonlinearity orders higherorder paper analytical reference solution dpd structure obtained setting correlation error signal basis function zero forward algebraic manipulations dpd coefficient denoted shown read assuming simplicity baseband equivalents complex gaussians simplifies variances two ccs clear advantage approach compared 
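For the simplified memoryless third-order case, the two reference solutions can be written compactly. The sketch below assumes a PA y = b1*x + b3*x|x|^2 with known b1 and b3 for the inverse solution, and a gain-normalized, time-aligned sub-band observation for the data-driven estimate, which by the argument above essentially coincides with the MMSE solution.

```python
import numpy as np

def alpha_inverse(b1, b3):
    """Inverse solution: pick alpha so the PA's own 3rd-order IM3 term is
    cancelled, i.e. b1*alpha + b3 = 0. It ignores the extra distortion that
    the injected signal itself creates through the nonlinearity, which is
    why it underperforms the MMSE/decorrelating solutions."""
    return -b3 / b1

def alpha_decorrelating(s, y_im3):
    """One-shot least-squares estimate from data.

    s     : memoryless +IM3 basis sequence, s(n) = x1(n)^2 * conj(x2(n))
    y_im3 : observed +IM3 sub-band at the PA output, normalized by the
            linear PA gain and time-aligned with s
    Choosing alpha so that the residual y_im3 + alpha*s is uncorrelated
    with s also minimizes the residual power over this single coefficient.
    """
    return -np.vdot(s, y_im3) / np.vdot(s, s)
```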
earlier mmse solution lies simple straightforward adaptive filtering based practical computing solutions sketched initially simple thirdorder processing require prior knowledge signal moments parameters furthermore opposed mmse solution decorrelation based approach easily extended include nonlinearities memory effects well described details next subsection lower triangular matrix obtained orthogonalization singular value decomposition using lower complexity iterative orthogonalization algorithm similarly snl basis functions corresponding also orthogonalized obtain new sets orthogonal basis functions etc adaptive learning present actual decorrelation based learning algorithm dpd coefficients notational convenience introduce following vectors upsampling upconversion adaptive higher order dpd fif third order decorrelator upsampling upconversion lpf lpf yim adaptive algorithm dsp fifth order decorrelator baseband snlq lpf qth order decorrelator fig detailed block diagram dpd nonlinearities memory denotes lth adaptive filter coefficient pth order orthogonalized basis function time index denotes adaptive filter memory depth furthermore vectors incorporate coefficients basis function samples polynomial order adopting notation instantaneous sample composite baseband equivalent input signal reads objective function target search dpd coefficients minimize ensemble correlation baseband equivalent injection signal overall processing flow graphically illustrated fig adaptive learning extend decorrelation based learning first adopting notations introduce following vectors notational convenience fif instantaneous sample baseband equivalent injection signal reads order adaptively update filter coefficients observed feedback receiver coefficient updated denotes baseband equivalent observation output current dpd coefficients scaling factor normalizing learning philosophically similar normalized least mean square nlms algorithm effect making learning characteristics robust input data dynamics proposed coefficient update seeking decorrelate observation adopted orthogonalized basis functions type learning algorithm also interpreted stochastic newton root search similar baseband equivalents dpd injection signals denoted respectively read thus composite baseband equivalent input signal dpds included reads fif fif fif samples samples estimation block size samples dpd update interval samples estimation block estimation block update dpd paramters update dpd paramters upsampling upconversion fig dpd learning concept dpd parameters estimated current estimation block applied dpd main path processing next block similar coefficient update coefficient updates read dpd upsampling upconversion dpd upsampling upconversion dpd upsampling upconversion dpd upsampling upconversion imp dpd upsampling upconversion imp dpd upsampling upconversion mod dac error signal feedback receiver decorrelating dpd dpd architecture digital injection denote baseband equivalents output corresponding dpds included adopting current coefficients respectively learning perspective observing single time obvious alternative means learning multiple dpds happens one time furthermore extending learning negative obtained interchanging snl basis functions expressions observing output corresponding negative upsampling upconversion dac dpd dac analog upconversion dpd dac analog upconversion dpd dac analog upconversion dpd dac analog upconversion imp dpd dac analog upconversion imp dpd dac analog upconversion mod error signal feedback receiver decorrelating 
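Two building blocks of that learning machinery, sketched in Python: a Cholesky-based whitening of the mutually correlated SNL basis functions, and one NLMS-flavored decorrelation step. The scaling choices are plausible conventions rather than the exact ones in the article, and in a real closed loop the observation sample e_n is re-measured as the coefficients change.

```python
import numpy as np

def orthogonalize(S):
    """Decorrelate highly correlated SNL basis functions (columns of S).

    With L the Cholesky factor of the sample covariance S^H S / N, the
    transformed matrix S @ inv(L)^H has uncorrelated, unit-covariance
    columns, which improves convergence and stability of the adaptation.
    """
    L = np.linalg.cholesky(S.conj().T @ S / S.shape[0])
    return S @ np.linalg.inv(L).conj().T

def decorrelation_update(w, e_n, s_n, mu=0.05, eps=1e-12):
    """One sample-adaptive decorrelation step for one sub-band.

    w   : current DPD coefficients (all basis functions x memory taps)
    e_n : complex sub-band observation sample from the feedback receiver
    s_n : stacked (orthogonalized) basis-function sample vector at time n
    The update  w <- w - mu * e_n * conj(s_n) / (||s_n||^2 + eps)  drives
    the correlation between observation and basis functions toward zero;
    the normalization makes the step robust to input power variations.
    """
    return w - mu * e_n * np.conj(s_n) / (np.vdot(s_n, s_n).real + eps)
```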
dpd learning fig illustrates previous learning concept principle closedloop feedback system nonlinear adaptive processing inside loop dpd learning phase potential hardware processing latency constraints dpd parameter convergence consequently dpd linearization performance affected especially learning loop delay becomes large stemming alternative new learning solution developed next proposed learning rule implies defining two distinct blocks processing illustrated fig single update cycle learning algorithm utilize samples whereas dpd parameter update interval samples thus proper choice arbitrarily long loop delays principle tolerated facilitating stable operation various hardware software processing latency constraints notice proper timing synchronization observation receiver output basis functions general needed accomplished prior executing actual coefficient learning procedure assuming estimation block size samples dpd filter memory depth per orthogonalized basis functions following vectors matrices dpd architecture analog injection fig overall dpd architecture multiple included thick lines indicate complex processing stack samples corresponding dpd filter coefficients within block defined index first sample block denoted dpd coefficient update reads refers conjugated error signal vector denotes filter input data matrix within processing block obtained new table omparison running complexities ninth order sub band full band dpd carrier spacing assumed dpd sample rate msps full band msps sub band dpd basis function generation flops dpd filtering flops total number flops gflops dpd dpd coefficients applied next block samples illustrated fig presentation describes learning extending principle straightforward thus explicitly shown mplementation spects omplexity nalysis one main advantages proposed decorrelationbased dpd technique reduced complexity compared classical dpd processing especially scenarios ccs widely spaced thus high speed adcs dacs required classical solutions section first address implementation aspects proposed dpd concept provide thorough comparison computing hardware complexity perspectives proposed dpds finally system power efficiency considerations also presented particular focus mobile devices dpd implementation alternatives fig shows two alternative architectures overall dpd processing multiple first architecture shown fig adds upsampling digital upconversion block dpd stage order digitally place generated injection signals proper intermediate frequencies adding dpd outputs digital domain single wideband dac per branch used signal upconverted amplified second architecture fig adds outputs dpds analog domain implying dpd followed dac per branch together analog complex upconversion common modulator used prior module two architectures advantages disadvantages particular carrier spacing component carriers large architecture likely suitable hand carrier spacing starts increase using single wideband dac may efficient cost power consumption perspectives case architecture likely attractive however extra processing may required architecture order achieve proper synchronization dacs forms interesting topic future research dpd common advantage architectures subband dpd block switched according prevailing emission levels limits considered subbands flexibility available classical fullband dpd solutions since design predistorter always tries linearize full composite transmit band dpd concept linearization flexibly tailored optimized frequencies critical emission limits perspective 
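The block-adaptive variant fits in a few lines; the Frobenius-norm normalization below is one reasonable scaling choice (an assumption of this sketch). Because the coefficient vector is updated only once per estimation block and applied to the next block, long and even unknown loop delays do not destabilize the recursion.

```python
import numpy as np

def block_decorrelation_update(w, e_blk, S_blk, mu=0.1, eps=1e-12):
    """One block-adaptive DPD coefficient update.

    e_blk : length-N observed sub-band error for the current estimation block
    S_blk : (N, L) matrix of (orthogonalized) basis samples for the same block
    The block correlation S^H e replaces the per-sample product; the updated
    w is applied to the DPD main path only for the *next* block, so hardware
    and software processing latencies between blocks are tolerated.
    """
    g = S_blk.conj().T @ e_blk
    return w - mu * g / (np.linalg.norm(S_blk) ** 2 + eps)
```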
versus dpd running complexity general computational complexity dpd classified three main parts identification complexity adaptation complexity running complexity identification part basically estimation complexity dpd parameters adaptation complexity includes required processing order adapt new operating conditions device aging finally running complexity critical especially devices involves number computations done per second dpd operating subsection focus dpd running complexity details dpd parameter identification feedback receiver complexity perspectives shortly discussed section quantitative comparison running complexities shall use number floating point operations flops per sample number dpd coefficients required sample rate predistortion path main quantitative metrics general running complexity divided two main parts first basis function generation second actual predistortion filtering using basis functions number flops required perform two operations shown table dpds respectively corresponding memory depths per adopted basis function dpd architecture use also comparative performance simulations section also widely applied otherwise based architecture ninth order nonlinearity dpd based architecture shown fig frequency selectivity nonlinear another important factor considered comparing two dpd architectures implies memory effects considered substantially longer filters needed dpd compared dpd certain performance requirement complexity analysis dpd case thus assume memory depth per basis function memory depth assumed dpd per basis function substantial reduction needed sample rates achieved adopting dpd concrete example consider challenging scenario two mhz ccs mhz carrier spacing required sample rate dpd dpd becomes consequently shown table huge reduction number flops per second flops achieved using dpd furthermore processing complexity dpd solution clearly feasible modern mobile device processing platforms terms gflops count classical fullband dpd clearly infeasible however scenarios carrier spacing ccs decreased bandwidth increases benefit using dpd approach reduced since sample rates fullband dpd become comparable notice also interband cases one adopt concurrent linearize main carriers easily complemented dpd processing protect receiver filtering offer sufficient isolation scenario could take place uplink band mhz band mhz interband technologically feasible adopt single module amplification scenario directly frequencies band mhz feedback receiver parameter estimation complexity addition complexity reduction dpd main processing path complexity feedback observation receiver used dpd parameter estimation adaptation also greatly reduced order estimate parameters dpd need observe output instead observing fullband including case dpd reduces cost complexity power consumption feedback path allowing use simpler instrumentation particular adc moreover single observation receiver required even linearizing multiple since parameter learning corresponding observation output done sequentially per manner finally terms parameter estimation algorithmic complexity proposed decorrelation based solutions extremely simple compared classical dpd related method based parameter fitting indirect learning architecture commonly adopted overall transmitter power efficiency perspectives dpd adopted less linear efficient operate near saturation region generally used however overall power efficiency device improved extra power consumed dpd stage less power savings due increased efficiency address aspect mobile 
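The running-complexity comparison boils down to flops per sample times the sample rate the predistorter must sustain. The helper below illustrates the bookkeeping with placeholder numbers (one complex multiply-accumulate counted as 8 real flops, a common convention); the exact counts behind the article's table follow the same pattern.

```python
def dpd_gflops(n_basis, mem_taps, fs_msps, basis_flops_per_sample):
    """Rough running-complexity estimate for a DPD main path.

    There are n_basis * mem_taps complex coefficients; each complex MAC is
    counted as 8 real flops. fs_msps is the rate the predistorter must run
    at: a full-band DPD must cover the whole composite band, so its rate
    grows with the carrier spacing, while a sub-band DPD only needs a small
    multiple of the component-carrier bandwidth.
    """
    filtering = 8 * n_basis * mem_taps             # flops/sample, filtering
    per_sample = filtering + basis_flops_per_sample
    return per_sample * fs_msps * 1e6 / 1e9        # GFLOPS

# Illustrative comparison with placeholder numbers: widely spaced CCs force
# a high full-band rate, while the sub-band DPD runs near the CC bandwidth.
full_band = dpd_gflops(n_basis=5, mem_taps=4, fs_msps=500, basis_flops_per_sample=60)
sub_band = dpd_gflops(n_basis=3, mem_taps=2, fs_msps=15, basis_flops_per_sample=30)
```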
device perspective consider practical scenario transmit power output mobile dbm stemming requirements assuming duplexer filter connector insertion losses good examples practical power efficiency figures operating highly nonlinear linear modes around even less respectively means power consumed highly nonlinear roughly mwatt corresponding linear consumes roughly mwatt words adopting highly nonlinear saves mwatt power particular example order suppress spurious emissions output nonlinear case adopt dpd solution state art implementation qualcomm hexagon dsp capable supporting gflops ghz reported enough carrying needed dpd processing example fullband dpd case hand requires gflops shown table clearly insufficient power consumption dsp platform shown approximately mwatt running mhz gflops assuming flops per cycle sufficient linearizing example thus adopting nonlinear already saves milliwatt comes interface explained complemented dpd processing enhanced linearity milliwatt additional power consumed thus overall power budget clearly favor using highly nonlinear complemented dpd structure even mobile device uplink carrier aggregation furthermore dpd processing implemented using dedicated hardware solution digital asic even dpd stage likely realized imulation esults section quantitative performance analysis proposed dpd solution presented using matlab simulations practical models pas designed devices general quantify suppression intermodulation power power ratios relative component carrier wanted signal power shown fig defined pwanted pwanted pim pim inband transmit waveform purity measured error vector magnitude evm defined perror pref table omparison suppression evm third order inverse mmse decorrelation based analytical sub band dpd solutions output power dpd inverse analytical analytical mmse positive dbc evm perror power error signal pref reference power ideal symbol constellation error signal defined difference ideal symbol values corresponding synchronized equalized samples output normalized identical linear gains comparison analytical dpd solutions section analytical reference expressions inverse minimum mse dpd solutions presented subsection shortly evaluate compare performance analytical solutions assuming memoryless known parameters focus positive simplicity memoryless model identified using true mobile transmit power main objective compare performance three analytical solutions terms linearization performance thereon verify decorrelation based solution essentially identifcal minimum mse solution signal used performance evaluation composed two ccs qpsk data modulation spacing obtained results shown table demonstrate decorrelationbased solution giving almost performance minimum mse solution also substantially better classical inverse solution terms imd suppression considered also seen table evm essentially affected dpd processing inverse dpd solutions next evaluate compare performance inverse dpd solutions inverse reference solution derived appendix simulations transmit waveform composed two mhz scfdma component carriers qpsk data modulation spacing mhz model turn memoryless model whose parameters identified using true mobile transmitting dbm inverse dpd using known parameters one adopting proposed sampleadaptive learning described detail section output spectra different solutions illustrated fig clearly seen fig baseband equivalent output spectrum dbm two mhz carriers mhz separation using memoryless inverse dpd solutions compared nonlinear processing dpd substantially outperforms inverse 
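The figures of merit used throughout the evaluations can be computed as follows; the sketch assumes linear-scale powers and synchronized, equalized output symbols normalized to identical linear gains, as stipulated in the text.

```python
import numpy as np

def im_dbc(p_im, p_wanted):
    """Spurious IM power relative to the wanted carrier power, in dBc."""
    return 10.0 * np.log10(p_im / p_wanted)

def evm_percent(rx, ref):
    """EVM = sqrt(P_error / P_ref), in percent.

    rx  : synchronized, equalized symbol samples at the PA output
    ref : ideal symbol constellation values
    P_error is the power of the difference between received and ideal
    symbols; P_ref is the ideal constellation reference power.
    """
    p_err = np.mean(np.abs(rx - ref) ** 2)
    p_ref = np.mean(np.abs(ref) ** 2)
    return 100.0 * np.sqrt(p_err / p_ref)
```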
based dpd third cases despite fact inverse solutions using known parameters reason inverse solutions cancel third fifth order terms described appendix suppress induced terms structural similarity correlation third basis functions hand proposed solution takes explicitly account thus achieves clearly better spurious emission suppression figure also illustrates performance dpd clearly better one remaining parts section shall focus detailed performance evaluations proposed dpd solution incorporating memory predistortion processing refer dpd simply dpd simplify presentation performance proposed dpd subsection evaluate performance proposed dpd realistic case memory model parallel hammerstein model four memory taps per branch parameters identified using measurements true mobile transmitting dbm dpd structure contains also memory two taps per basis function learning principle adopted blocks containing samples transmit waveform otherwise identical earlier cases separation mhz fig shows effectiveness proposed higherorder dpds processing table iii omparison quantitative running complexity linearization performance full band versus sub band dpd carriers qpsk data modulation carrier spacing used dpd ila dpd dpd dpd dpd coeffs running complexity msps gflops evm transmitter performance positive dbc positive dbc fig baseband equivalent output spectrum dbm two mhz carriers mhz separation using memory different orders dpd compared memory depth equal per dpd snl basis function fig spurious emissions power using different orders dpd processing memory depth equal per dpd snl basis function memory used filter insertion loss assumed compared basic solution presented earlier suppression spurious emissions shown adopting processing something output power dbm fig baseband equivalent two mhz carriers mhz separation using ninth order extracted real mobile dbm ninth order negative dpd solutions shown memory depth equal per dpd snl basis function reported prior works using practical model memory effects fig shows spurious emission level changes varying power using different dpd orders spurious emissions clearly general spurious emission limit even high powers dbm using higher subband dpd another main contribution paper extension dpd solution include also fig shows performance negative dpds addition negative dpd nonlinear processing adopted dpds suppression achieved negative respectively shows effectiveness proposed solutions processing suppressing also spurious emissions higher versus dpd complexity performance analysis subsection compare performance proposed dpd technique classical dpd adopting parallel hammerstein based wideband linearization indirect learning architecture ila based processing generate baseband samples transfer block samples vst modulation upconversion extract received block samples vst downconversion demodulation estimate dpd parameters using blockadaptive estimation processing transmit predistorted signal vst vector signal transceiver vst mobile attenuator vst fig hardware setup used measurements testing evaluating proposed dpd evaluation considers linearization performance complexity transmitter efficiency utilizing earlier complexity analysis results reported section fullband ila dpd uses samples parameter learning per ila iteration total ila iterations number memory taps per branch dpd turn uses also total samples block size adopts two memory taps per dpd snl basis function results collected table iii shows addition significantly lower complexity measured number gflops dpd achieves better 
linearization performance terms spurious imd suppression considered hand dpd outperforms dpd terms inband distortion mitigation evm expected since dpd linearizes whole transmit band including main ccs imd spurious emissions however evm dpd around far sufficient modulations least additionally dpd based ila structure requires additional back guarantee stable operation required dpd thus transmitter becomes power efficient using dpd shown table iii fairness acknowledged dpd typically enhance evm aclr subband dpd concept specifically targeting spurious emissions easurement esults order demonstrate operation proposed dpd solution next report results comprehensive measurements using commercial lteadvanced mobile terminal together vector signal transceiver vst implementing modulation demodulation actual dpd processing parameter learning algorithms running host processor mobile power amplifier used measurements designed band mhz gain national instruments vst includes vector signal generator vsg vector signal analyzer vsa mhz instantaneous bandwidth experiments digital baseband waveform divided blocks size samples first generated locally host processor transferred vsg perform modulation desired power level input vst output connected input port external power amplifier whose output port connected vst input attenuator implementing observation receiver illustrated also fig vsa performs demodulation bring signal back baseband baseband observation block filtered select used dpd learning proper alignment locally generated basis functions explained section dpd block size used experiments dpd memory depth general two different measurement examples demonstrated subsection first experiment demonstrates violation spurious emission limit due inband emission spur thus attenuated filter second experiment demonstrates desensitization example fdd transceiver context spur located band mhz sufficiently attenuated duplexer filter notice principle tackling specifically desensitization problem proposed dpd solution main receiver spurious emission limit violation band spur limit fig band measurement example output showing gain using dpd spur reduction third fifth seventh dpds demonstrated using real commercial mobile operating dbm signal two mhz ccs mhz carrier spacing used band band fig band measurement example output showing gain using dpd falling band signal two mhz ccs mhz carrier spacing used real commercial mobile operating dbm fdd device could potentially used observation receiver learning dpd coefficients without extra receiver however approach would indeed applicable desensitization case mitigating harmful emissions auxiliary observation receiver would anyway adopted thus measurements adopt auxiliary receiver based approach consider developments main receiver based parameter learning important topic future work fig shows measured power spectral density output using dpd different orders adopted waveform signal mhz per mhz carrier spacing power level output example dbm intermodulation distortion emitted inband since total band covers mhz clearly violating spurious emission limit dpd used using dpd spurious emission level well emission limit given least processing deployed general measured spurious emission suppression achieved ninth order dpd thus giving additional gain compared basic thirdorder dpd shown fig notice need predistorting example since emissions filtered filter receiver desensitization fig illustrates another band example mhz bandwidths mhz carrier spacing duplexing distance band mhz thus spur mhz 
example falling band therefore potentially desensitizing receiver power level output example dbm adopting dpd spurious emission suppression achieved seen fig two taps per basis function used parameter learning deployed fig shows convergence dpd coefficients third fifth seventh basis functions respectively seen coefficients converge stable manner real environment due orthogonalization snl basis functions explained section general assuming duplexer filter attenuation band band duplexer integrated power spur band without dpd approximately effective noise floor assuming noise figure would thus cause significant receiver desensitization could lead complete blocking desired signal hand proposed dpd deployed integrated power spur band approximately effective noise floor shown also fig though residual spur still slightly effective noise floor sensitivity degradation substantially relaxed despite operating maximum output power dbm elaborate fig showing integrated power spur lna input changing output power level ninth order subband dpd transmit dbm output power effectively perfectly linear terms spur fig convergence dpd coefficients memory depth per basis function positive considered learning deployed measurement example two mhz ccs mhz carrier spacing used real commercial mobile operating dbm methods also formulated allowing efficient estimation tracking low computational complexity algorithm derivations modeling carried general case memory well dpd processing different nonlinear distortion processing orders beyond classical cases also reported proposed technique find application suppressing inband spurs would violate spurious emission limit suppressing spurs falling receiver band protecting primary user transmissions cognitive radio systems quantitative complexity analysis presented comparing proposed solution conventional dpd solutions available literature performance evaluated comprehensive manner showing excellent linearization performance despite considerably reduced complexity compared classical solutions finally extensive measurement results using commercial mobile power amplifier reported evidencing suppression problematic spurious emissions ppendix nalytical mmse olution hird rder band dpd derive analytical minimum error solution shown used reference solution simulations section first define error signal since ideal predistortion signal would zero thus optimization means minimizing power error signal error signal reads fig integrated power lna input mhz output power using ninth order dpd processing memory depth equal per dpd snl basis function commercial mobile operating lte band used measurements duplexer filter attenuation assumed band level compared dbm output power without dpd additionally dpd integrated power less effective noise floor dbm output power without dpd already noise floor power level vii onclusions article novel digital predistortion dpd solution proposed suppressing unwanted spurious emissions spectrum access novel decorrelation based adaptive parameter learning memoryless model parameters denotes dpd coefficient optimized baseband equivalents two component carriers statistical expectation assuming component carrier signals statistically independent ignoring vanishingly small terms reads ieij used shorthand notation differentiating respect yields setting zero solving yields optimal mmse dpd parameter given inv denotes operator concludes proof ppendix nalytical ifth rder nverse based band dpd olution derive order inverse solution dpd used reference solution simulations 
section output memoryless order polynomial model polynomial coefficients composite baseband equivalent input signal given direct substitution baseband equivalent distortion term located three times frequency extracted reads yim stemming signal structure dpd injection signal composed three basis functions form case inverse based subband dpd basis functions multiplied proper coefficients distortion terms subband output order five cancelled thus incorporating dpd processing yet arbitrary coefficients composite baseband equivalent input signal reads fif fif fif inv inv fif inv fif subscript inv coefficients emphasizing inverse based solution substituting extracting third fifth order terms yields inv inv inv inv inv easily obtain inverse coefficients null third terms yielding inv inv concludes derivation eferences parkvall furuskar dahlman evolution lte toward ieee commun vol february lte evolved universal terrestrial radio access user equipment radio transmission reception release june bassam chen helaoui ghannouchi transmitter architecture carrier aggregation systems ieee microw vol july park wallen khayrallah carrier aggregation design challenges terminals ieee commun vol way forward intraband transmitter tech intraband unwanted tech gandhi greenstreet quintal digital radio strategies provide benefits small cell base stations tech texas instruments may guan zhu green communications digital predistortion wideband power amplifiers ieee microw vol reference sensitivity requirements two tech international telecommunication union radio communication sector recommendation unwanted emissions spurious domain chao wenhui cao yan guo zhu digital compensation transmitter leakage carrier aggregation applications fpga implementation ieee trans microw theory vol dec kiayani abdelaziz anttila lehtinen valkama digital mitigation receiver desensitization carrier aggregation fdd transceivers ieee trans microw theory vol nov bassam ghannouchi helaoui digital predistortion architecture concurrent transmitters ieee trans microw theory vol roblin quindroit naraharisetti gheitanchi fitton concurrent linearization ieee microw vol liu yan asbeck concurrent digital predistortion single feedback loop ieee trans microw theory vol may roblin myoung chaillot kim fathimulla strahler bibyk predistortion linearization power amplifiers ieee trans microw theory vol kim roblin chaillot xie generalized architecture digital predistortion linearization technique ieee trans microw theory vol bassam helaoui ghannouchi digital predistorter transmitters ieee trans microw theory vol abdelaziz anttila mohammadi ghannouchi valkama power amplifier linearization carrier aggregation mobile transceivers ieee international conference acoustics speech signal processing may abdelaziz anttila cavallaro bhattacharyya mohammadi ghannouchi juntti valkama digital predistortion reducing power amplifier spurious emissions flexible radio international conference cognitive radio oriented wireless networks june abdelaziz tarver anttila martinez valkama cavallaro digital predistortion noncontiguous transmissions algorithm development prototype implementation asilomar conference signals systems computers abdelaziz anttila wyglinski valkama digital predistortion mitigating spurious emissions spectrally agile radios ieee commun vol march anttila abdelaziz valkama wyglinski digital predistortion unwanted emission reduction ieee trans vol tehrani cao afsardoost eriksson isaksson fager comparative analysis tradeoff power amplifier behavioral models ieee trans 
microw theory vol june qian yao huang feng digital predistortion algorithm power amplier linearization ieee trans vol hoffmann iterative algorithms gram schmidt orthogonalization computing vol dsp powered ldo mobile applications ieee circuits vol jan lte evolved universal terrestrial radio access system scenarios release may mahmoud abdelaziz received degree honors degree electronics electrical communications engineering cairo university egypt currently pursuing doctoral degree tampere university technology finland works researcher department electronics communications working communication systems signal processing engineer newport media well companies wireless industry research interests include statistical adaptive signal processing flexible radio transceivers wideband digital cognitive radio systems lauri anttila received degree tech degree honors electrical engineering tampere university technology tut tampere finland currently senior research fellow department electronics communications engineering tut research interests signal processing wireless communications particular radio implementation challenges cellular radio radio flexible duplexing techniques transmitter receiver linearization peer reviewed articles areas well two book chapters chance tarver received degree electrical engineering louisiana tech university ruston degree electrical computer engineering rice university houston currently student department electrical computer engineering rice university research interests include software defined radio signal processing wireless communications kaipeng received degree physics nanjing university nanjing china degree electrical computer engineering rice university currently candidate department electrical computer engineering rice university houston texas research interests include digital signal processing parallel computing gpgpu multicore cpu radios massive mimo systems joseph cavallaro received degree university pennsylvania philadelphia degree princeton university princeton degree cornell university ithaca electrical engineering bell laboratories holmdel joined faculty rice university houston currently professor electrical computer engineering research interests include computer arithmetic dsp gpu fpga vlsi architectures applications wireless communications academic year served national science foundation director prototyping tools methodology program nokia foundation fellow visiting professor university oulu finland continues affiliation adjunct professor currently director center multimedia communication rice university fellow ieee member ieee sps design implementation signal processing systems ieee cas circuits systems communications currently associate editor ieee transactions signal processing ieee signal processing letters journal signal processing systems signal processing communications symposium ieee global communications conference ieee international conference systems architectures processors asap glsvlsi finance chair ieee globalsip conference tpc ieee sips workshop member ieee cas society board governors mikko valkama born pirkkala finland november received degrees honors electrical engineering tampere university technology tut finland respectively received best thesis finnish academy science letters dissertation entitled advanced signal processing wideband receivers models algorithms working visiting researcher communications systems signal processing institute sdsu san diego currently full professor department department electronics communications engineering 
TUT, Finland. His general research interests include communications signal processing; estimation and detection techniques; signal processing algorithms for software-defined flexible radios; cognitive radio; radio localization; mobile cellular radio; digital transmission techniques, including different variants of multicarrier modulation methods and OFDM; and radio resource management for mobile networks.
3
characteristic matrices trellis reduction convolutional codes masato tajima senior member ieee may abstract basic properties characteristic matrix convolutional code investigated convolutional code regarded linear block code since corresponding scalar generator matrix gtb kind cyclic structure associated characteristic matrix also cyclic structure basic properties characteristic matrix obtained next using derived results discuss possibility trellis reduction given convolutional code cases find scalar generator matrix equivalent gtb based characteristic matrix case polynomial generator matrix corresponding reduced reduced using appropriate transformations trellis reduction original convolutional code realized many cases polynomial generator matrix corresponding monomial factor column reduced dividing column factor note transformation corresponds cyclically shifting associated code subsequence path regarded code sequence left thus allow partial cyclic shifts path trellis reduction accomplished index terms convolutional codes trellis characteristic matrix cyclic shift trellis reduction tajima graduate school science engineering university toyama gofuku toyama japan masatotjm manuscript received april revised august paper presented part ieice technical committee conference march journal latex class files vol august characteristic matrices trellis reduction convolutional codes ntroduction trellis representations linear block codes studied great interest subsequently trellises linear block codes received much attention given linear block code exists unique minimal conventional trellis trellis simultaneously minimizes measures trellis complexity however trellises property minimality trellises depends measure used general complexity trellis may much lower minimal conventional trellis many contributions subject including works strong influence subsequent studies remarkable progress made koetter vardy paper showed linear block code length full support exists list characteristic generators characteristic matrix minimal minimal trellises obtained different method producing trellises proposed nori shankar used bcjr construction works investigated weaver particular noting characteristic matrix given code necessarily unique refined generalized previous works recent works provide research subject hand convolutional codes proposed wolf representations block codes introduced solomon van tilborg abbreviated technique convolutional code used construct block code without loss rate connection subject also many works including since convolutional code identified linear block code results trellises linear block codes used particular think characteristic matrix given convolutional code paper first investigate characteristic matrix convolutional code based derived results discuss possibility trellis reduction given convolutional code outline rest paper follows section review basic notions needed paper section iii investigate basic properties characteristic matrix convolutional code convolutional code generator matrix regarded linear block code scalar generator matrix denoted gtb constructed using coefficients appear polynomial expansion see gtb kind cyclic structure shown characteristic span list associated characteristic matrix consists basic spans right cyclic shifts basic properties characteristic matrix derived section deal transformations discuss relationship transformations corresponding scalar generator matrices gtb see dividing column monomial factor corresponds cyclically shifting column subsequence gtb 
left whereas multiplying column monomial corresponds cyclically shifting column subsequence gtb right properties essentially used trellis reduction discussed section section discuss possibility trellis reduction given convolutional code identify code block code stated think characteristic matrix consider case characteristic generators consist basic generators right cyclic shifts generate code see characteristic generators form scalar generator matrix associated polynomial generator matrix another convolutional code case constraint length obtained generator matrix smaller original one trellis reduction realized even kind reduction possible cases newly obtained generator matrix contains monomial factor column possibility generator matrix reduced sweeping monomial factor column note operation corresponds cyclically shifting corresponding code subsequence left way trellis reduction accomplished also present trellis reduction method high rate codes uses reciprocal dual encoder remark trellis section length important parameter proposed method restricted convolutional codes short moderate section length give upper bound section length evaluating span lengths characteristic generators finally conclusions provided section reliminaries begin basic notions needed paper underlying field assumed let linear block code set indices codeword denoted codeword expressed also regarded time axis trellises since trellises journal latex class files vol august considered paper convenient identify ring integers modulo hence dealing trellises index arithmetic implicitly performed modulo notion span fundamental trellis theory given codeword span denoted semiopen interval corresponding closed interval contains nonzero positions due cyclic structure time axis adopt following interpretation intervals define call intervals conventional circular otherwise connection construction minimal trellises koetter vardy introduced notion characteristic generator denote cyclic shift left positions similarly denote cyclic shift right positions let basis form code characteristic generator pair consisting codeword span nonzero set characteristic generators given understanding assume full support characteristic matrix matrix elements rows definition implies refer characteristic matrix associated spans taken account note basis form necessarily unique hence may uniquely determined hand set spans denoted accompanied ordering uniquely determined code called characteristic span list called characteristic span order clarify fact weaver introduced notion characteristic pair definition generating set represents associated spans paper basically follow definition weaver order emphasize fact characteristic matrix inherently assumes associated spans leave term characteristic matrix definition thus define follows definition definition let linear block code support characteristic matrix characteristic span list defined pair properties generates span distinct distinct exist exactly row indices ali bli remark property derived lemma related remarks also property derived proof theorem following danger confusion shall use terms characteristic matrix characteristic matrix span list interchangeably iii haracteristic atrices tail iting onvolutional ode let polynomial generator matrix size denote corresponding polynomial check matrix assumed canonical consider standard trellis sections convolutional code defined max assumed memory lengths respectively condition restriction encoder starts ends state paths trellis start end state admissible call paths paths 
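In GF(2) terms, a tail-biting path is the circular convolution of the information sequence with the generator polynomial coefficients, so the encoder state at the end of the block wraps back to its beginning. A minimal sketch, using the standard memory-2 rate-1/2 generator [1 + D^2, 1 + D + D^2] as an arbitrary example (not necessarily one of the paper's examples):

```python
import numpy as np

def tb_encode(u, G_coeffs):
    """Tail-biting encoding over GF(2) by circular convolution.

    u        : (k, N) binary information sequences (N trellis sections)
    G_coeffs : (m+1, k, n) binary coefficient matrices G_0, ..., G_m of G(D)
    Returns the length-nN codeword; time indices are taken modulo N, which
    is exactly the tail-biting (circular time axis) condition.
    """
    m1, k, n = G_coeffs.shape
    N = u.shape[1]
    v = np.zeros((n, N), dtype=int)
    for t in range(N):
        for d in range(m1):                   # v_t = sum_d u_{t-d} G_d (mod 2)
            v[:, t] ^= (u[:, (t - d) % N] @ G_coeffs[d]) % 2
    return v.T.reshape(-1)                    # blocks of n bits per section

# G(D) = [1 + D^2, 1 + D + D^2]: G_0 = [1 1], G_1 = [0 1], G_2 = [1 1].
G = np.array([[[1, 1]], [[0, 1]], [[1, 1]]])
u = np.array([[1, 0, 1, 1, 0, 0]])            # one length-6 information row
cw = tb_encode(u, G)                          # a tail-biting codeword
```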
let set paths following call convolutional code section length defined section danger confusion omit phrase section length regarded linear block code length simplify notations identified denoted simply let journal latex class files vol august polynomial expansion matrices scalar generator matrix given size hence say convolutional code generated gtb following call gtb generator matrix abbreviated tbgm associated convolutional code defined simply tbgm associated computation characteristic matrices koetter vardy given algorithm compute characteristic matrix linear block code consider convolutional code generated gtb note equivalent periodic structure period using property characteristic matrix computed efficiently let code generated gtb let basis form code characteristic matrix defined follows since gtb equivalent similarly general xin xin journal latex class files vol august hence obtained thus shown following proposition characteristic matrix convolutional code generated gtb given corollary relation holds characteristic matrix given proof assumption similarly proposition follows remark many practical applications characteristic matrix convolutional code obtained based corollary example consider convolutional code section length defined associated tbgm given gtb case journal latex class files vol august applying matrices characteristic matrix obtained follows note spans connected whereas spans connected hence see characteristic matrix obtained simply applying structure characteristic span list let characteristic matrix span list characteristic matrix span list remark using repeatedly relation see characteristic matrix span list consider convolutional code generated gtb set since equivalent holds thus following lemma let convolutional code generated gtb characteristic matrix span list also characteristic matrix span list let given since characteristic span list uniquely determined coincide ordering proposition characteristic span list convolutional code generated gtb consists set basic spans journal latex class files vol august proof suppose spans sorted take notice following set spans transformed since coincide ordering holds hence similarly set spans transformed reason holds hence continuing argument journal latex class files vol august example consider convolutional code section length defined rate encoder using associated tbgm gtb charactreristic matrix computed follows see characteristic span list consists set basic spans right cyclic shifts positions counting characteristic matrices recall definition characteristic matrix given code basis form code note necessarily unique hence uniquely determined respect subject weaver discussed relationship characteristic span list number characteristic matrices let characteristic span list define set follows represents number spans included specified span weaver proved following lemma weaver let characteristic span exist characteristic generators span fact derived next observation let consider two characteristic generators spans respectively also characteristic generator span consider convolutional code generated gtb already shown characteristic span list consists set basic spans hence suffices consider spans purpose counting number characteristic matrices define follows journal latex class files vol august also let following exist characteristic generators span exist characteristic generators span exist characteristic generators span result degree freedom related spans given since degree freedom common blocks spans overall degree freedom related 
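The cyclic structure exploited above is easy to exhibit in code: the scalar tail-biting generator matrix is block-circulant, so the right cyclic shift of any generator by a multiple of n stays in the code, and the full characteristic matrix follows from the n basic generators. A sketch assuming N > m (the span bookkeeping of the Koetter-Vardy algorithm is omitted):

```python
import numpy as np

def build_gtb(G_coeffs, N):
    """Scalar tail-biting generator matrix G_tb (kN x nN) over GF(2).

    Row block t places G_0, ..., G_m starting at column block t, wrapping
    modulo N; this wraparound makes G_tb block-circulant. Assumes N > m so
    the coefficient blocks of one row block never collide.
    """
    m1, k, n = G_coeffs.shape
    Gtb = np.zeros((k * N, n * N), dtype=int)
    for t in range(N):
        for d in range(m1):
            c = ((t + d) % N) * n
            Gtb[t * k:(t + 1) * k, c:c + n] ^= G_coeffs[d]
    return Gtb

def shift_right(row, n):
    """Right cyclic shift by one trellis section (n symbol positions).

    By the block-circulant structure, the shift of any codeword is again a
    codeword, so the characteristic span list consists of n basic spans
    together with all their right cyclic shifts by multiples of n.
    """
    return np.roll(row, n)
```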
becomes thus shown following proposition let convolutional code generated gtb let exist characteristic matrices example continued take notice first three rows characteristic matrix hence exist characteristic matrices span lengths characteristic generators let span codeword span length defined number elements closed interval span alone referred without specifying accompanied codeword use term span length span let characteristic span list convolutional code generated gtb suppose spans sorted theorem holds due structure see proposition side equality becomes derivation also used relation journal latex class files vol august replacing respectively equality reduces thus shown following proposition let characteristic span list convolutional code generated gtb denote set basic spans sum span lengths spans given example continued also ransformations orresponding tbgm section discuss relationship transformations generator matrix corresponding tbgm gtb consider following transformations dividing jth column multiplying jth column adding ith row multiplied jth row implicit transformations next section see transformations play essential role trellis reduction convolutional codes dividing column suppose jth column monomial factor hence form assume without loss generality let journal latex class files vol august polynomial expansion comparing entries equations dividing first column let resulting matrix polynomial expansion consider tbgm associated denoted note gtb regarded matrices blocks columns view entries relation journal latex class files vol august obtained cyclically shifting first column block left positions thus following proposition regard gtb matrix blocks columns suppose jth column monomial factor dividing jth column equivalent cyclically shifting jth column block gtb left positions let convolutional code section length defined note codeword consists blocks components let cyclically shift jth component block left positions denote set resulting modified codewords already shown obtained cyclically shifting jth column block left positions hence generated words represented convolutional code defined multiplying column consider multiplication jth column following assume without loss generality hence resulting matrix form accordingly polynomial expansion becomes journal latex class files vol august consider tbgm associated denoted note gtb consist blocks columns view entries see obtained cyclically shifting first column block right positions thus following proposition regard gtb matrix blocks columns suppose multiplying jth column equivalent cyclically shifting jth column block gtb right positions remark order defined condition required let convolutional code section length generator matrix let previous section case however jth component block cyclically shifted right positions shown obtained cyclically shifting jth column block right positions hence generated words represented convolutional code defined consider addition ith row multiplied jth row denoted following assume without loss generality let first row also let polynomial expansion size polynomial expansion becomes note first row gtb expressed hence right cyclic shift positions coincides row gtb row second row within matrix note elementary row corresponds addition operation thus following proposition suppose consider operation let resulting matrix associated tbgm equivalent taking consideration proposition let introduce useful notion let convolutional codes section length defined respectively denote memory lengths respectively max let gtb 
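The column transformations have a direct scalar-level picture, sketched below: dividing the j-th column of G(D) by D^l cyclically advances the j-th column subsequence of G_tb by l sections, while multiplying by D^l is the opposite shift (replace (t + l) by (t - l) in the index).

```python
import numpy as np

def divide_column_by_Dl(Gtb, j, l, n, N):
    """Emulate dividing column j of G(D) by D^l at the scalar-matrix level.

    The columns {j, j+n, ..., j+(N-1)n} of G_tb form the j-th column
    subsequence; dividing by D^l cyclically shifts that subsequence left
    (earlier in time) by l trellis sections.
    """
    out = Gtb.copy()
    old_cols = [((t + l) % N) * n + j for t in range(N)]
    new_cols = [t * n + j for t in range(N)]
    out[:, new_cols] = Gtb[:, old_cols]
    return out
```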
tbgm associated respectively see equivalent leads following definition definition gtb equivalent say journal latex class files vol august thus following proposition convolutional code defined represented convolutional code defined vice versa proof direct consequence definition rellis eduction onvolutional odes section show convolutional code short moderate section length associated trellis reduced begin example example trellis reduction fig convolutional code defined consider convolutional code defined section length set corresponding trellis shown paths start end state paths valid codewords set paths since polynomial expansion tbgm associated given gtb based gtb characteristic matrix computed follows journal latex class files vol august choosing rows let see rows linearly independent thus generate equivalent gtb note consists first row right cyclic shifts positions accordingly regarded tbgm associated hence equally represented convolutional code defined remark constraint length greater hand observe first column factor dividing first column note transformation corresponds cyclically shifting first component branch path left two branches proposition transformation original convolutional code represented using trellis associated well trellis shown example take notice path starts ends state cyclically shifting first component branch left two branches becomes see modified path represented path starts ends state example shows cases given convolutional code represented using reduced trellis less state complexity allow partial cyclic shifts path fig convolutional code defined trellis reduction convolutional codes argument previous section though presented terms specific example entirely general method directly extended general case let section iii denote constraint length consider convolutional code section length defined trellis reduction procedure becomes follows procedure trellis reduction compute characteristic matrix based tbgm gtb consists rows right cyclic shifts integer multiple choosing rows form properties rows linearly independent thus generate consists rows right cyclic shifts integer multiple iii direct reduction regarded tbgm associated another generator matrix let constraint length trellis reduction realized indirect reduction even cases monomial factor jth column possibility reduced dividing jth column resulting matrix denoted let constraint length original trellis reduced journal latex class files vol august cyclically shifting jth component branch path left branches set modified paths equally represented convolutional code defined justified proposition thus trellis reduction accomplished necessarily unique hence necessary try using another characteristic matrix remark row rate codes rather easy find equivalent gtb also row rate codes make easy determine whether reduced stated restrictions selection following proposition number characteristic matrices given defined section fixed number satisfy condition given proof candidate tbgm associated encoder hence consequence structure tbgm example consider convolutional code section lengh defined using associated tbgm gtb characteristic matrix computed follows choosing rows define see equivalent gtb also see tbgm associated note constraint length reduced compared hand observe second column factor dividing column transformation corresponds cyclically shifting second component branch path left one branch proposition result modified paths represented using trellis thus trellis reduction accomplished remark stated necessarily unique example 
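Step (ii) of the reduction procedure amounts to a GF(2) rank test: the chosen rows, together with all their right cyclic shifts by multiples of n, must be linearly independent and span exactly the row space of G_tb. A minimal check:

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) by Gaussian elimination."""
    M = M.copy() % 2
    r = 0
    for c in range(M.shape[1]):
        piv = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]             # bring pivot row up
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]                  # eliminate column c elsewhere
        r += 1
    return r

def generates_same_code(rows, Gtb, n):
    """Check that the chosen rows plus all their right cyclic shifts by
    multiples of n are linearly independent and span the row space of G_tb,
    i.e., the candidate generators reproduce the tail-biting code."""
    N = Gtb.shape[1] // n
    shifted = np.array([np.roll(r, s * n) for r in rows for s in range(N)]) % 2
    full = gf2_rank(Gtb)
    return (gf2_rank(shifted) == len(rows) * N == full and
            gf2_rank(np.vstack([Gtb % 2, shifted])) == full)
```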
characteristic matrix journal latex class files vol august used trellis reduction realized using procedure using appropriate characteristic matrices reduction method also applied following cases trellis reduction using reciprocal dual encoder high rate codes may monomial factor columns easily determined whether reduced cases useful consider reciprocal dual encoder associated reciprocal dual encoder defined follows let section iii also let corresponding check matrix size reciprocal dual encoder obtained substituting multiplying ith row resulting matrix degree ith row definition mceliece lin let gscalar scalar generator matrix terminated convolutional code defined gscalar given gscalar matrix journal latex class files vol august repeatedly appears vertical slice gscalar except initial final transient sections called matrix module trellis module trellis associated gscalar corresponds gscalar form minimal state complexity profile consisting dimensions state spaces meaning obtaining reciprocal dual encoder based following result proposition tang lin consider minimal trellis module associated reciprocal dual encoder state complex profiles identical hence order determine whether reduced compute reciprocal dual encoder associated connection encoder associated reciprocal dual encoder following proposition let gtb tbgm associated check matrix corresponding obtained tbgm denoted associated reciprocal dual encoder proof let polynomial expansion memory length matrices known check matrix corresponding gtb given size hand let polynomial expansion tbgm associated denoted defined size take notice ith row see row identical ith row similarly ith row identical ith row due cyclic structures similar correspondences hold successively hence given row permutation procedure computing obtained based proposition procedure computing journal latex class files vol august compute characteristic matrix dual code based consists rows right cyclic shifts integer multiple choosing rows form properties rows linearly independent thus generate consists rows right cyclic shifts integer multiple iii let polynomial matrix whose tbgm note equivalent gtb respectively hence check matrix corresponding follows proposition reciprocal dual encoder associated necessarily unique hence necessary try using another characteristic matrix following example trellis reduction realized using reciprocal dual encoder example consider rate convolutional code section length generator matrix based associated tbgm gtb characteristic matrix computed follows span list given choosing rows let journal latex class files vol august span list given see rows linearly independent thus generate equivalent gtb also note tbgm associated hence original convolutional code equally represented convolutional code defined observe constraint length equal also notice second column factor however reduced dividing column general difficult tell possibility reduction looking entries compute reciprocal dual encoder associated begin reciprocal dual encoder associated given based characteristic matrix computed follows span list given note span list next choosing rows let span list given span list given see equivalent thus scalar check matrix corresponding also note tbgm associated already know tbgm associated hence proposition reciprocal dual encoder associated given journal latex class files vol august observe factor first column factor second column sweeping factors corresponding columns constraint length reduced one fact implies constraint length also reduced following show 
reduction actually realized purpose check matrix corresponding used let generator matrix corresponding check matrix convolutional code respectively following relation denoted shown reduced simultaneously reduction possible relation retained whole reduction process apply method case consideration step add first row multiplied second row proposition transformation result step divide second column divide first third columns step multiply third column divide third column step note basic using decomposition equivalent basic matrix obtained note proposition reduction process except transformations second column divided whereas third column multiplied accordingly path let cyclically shift second component branch left one branch cyclically shift third component branch right one branch modified paths represented convolutional code defined see propositions trellis shown example take information sequence corresponding path ugtb cyclically shifting second component branch left one branch cyclically shifting third component branch right one branch see path starts ends state fig convolutional code defined remark remark argument assumed equivalent gtb equivalence checked beforehand general however relatively large high rate codes hence preferable equivalence gtb derived without checking beforehand actually equivalence derived equivalence using result weaver theorem see appendix journal latex class files vol august relation trellis reduction section length proposed trellis reduction method section length important parameter actually method effective convolutional codes short moderate section length span lengths characteristic generators increase grows see section already shown trellis generator matrix reduced case consider trellis time however set gtb given gtb note generator gtb span assigned natural characteristic matrix computed follows thus set basic spans given manner observe span lengths spans respect two cases first row used basic generator identical gtb second row used basic generator span lengths rows greater facts mean either case trellis reduction realized using proposed method hand example implies upper bound estimated comparing span lengths generators generators gtb let characteristic matrix convolutional code section length already know associated span list consists set basic spans also sum span lengths spans given proposed method consists generators span viewpoint corresponds choosing spans accordingly sum span lengths spans denoted approximated journal latex class files vol august hand consider gtb generator span assigned natural manner span list consists set basic spans evaluate sum span lengths spans denoted let degree ith row take notice first block rows gtb span length ith row approximated hence constraint length since trellis reduction realized case consists generators short span length take inequality criterion trellis reduction estimate upper bound using inequality several concrete cases show condition observe convolutional codes presented section satisfy condition also rate convolutional code discussed previous section satisfies condition onclusion paper derived several basic properties characteristic matrix convolutional code shown characteristic span list consists basic spans right cyclic shifts using derived results shown trellis associated given convolutional code reduced cases candidates trellis reduction taken generator matrices tables chapter principle example rate encoders section chosen table hand good convolutional encoders obtained given rate optimal encoder memory 
length produces largest minimum distance section length applied proposed reduction method encoders see table result example obtained octal notation generator matrices used similarly obtained note listed table finally remark proposed trellis reduction method depends choice characteristic matrix given convolutional code though number characteristic matrices examined rather restricted proposition method fully constructive also detailed condition trellis reduction realized clarified journal latex class files vol august ppendix roof equivalence gtb first prove following proposition let section iii consider convolutional code generated gtb corresponding dual code let characteristic matrix span list also let characteristic matrix span list let submatrices respectively consists rows whereas consists rows denote span lists respectively assume following consists rows right cyclic shifts integer multiple span include spans except iii rows linearly independent thus generate consists rows right cyclic shifts integer multiple span include spans except satisfy consists spans whose reverse rows linearly independent thus generate equivalent gtb remark consists generators short span length probable condition holds similarly consists generators short span length probable condition holds proof follows common characteristic matrices similarly follows common characteristic matrices also dual selection definition result theorem rank rank let rank trellises based dual iii rank hence rank let back example example code generated gtb generated dual code considered characteristic matrix whereas characteristic matrix note neither unique observed relation inclusion associated span lists next take notice matrices submatrices respectively corresponding span lists given note following span include spans except span include spans except moreover consists spans whose reverse actually reversing spans see spans facts show conditions proposition satisfied replaced respectively hence equivalence gtb derived remark theorem holds pair characteristic matrix corresponding dual one see hand pair computed may duality relation however common characteristic matrices common characteristic matrices well hence theorem applied case acknowledgment author would like thank heide valuable comments duality trellises journal latex class files vol august eferences anderson hladik optimal circular viterbi decoder bounbed distance criterion ieee trans vol bahl cocke jelinek raviv optimal decoding linear codes minimizing symbol error rate ieee trans inform theory vol march calderbank forney vardy minimal trellises golay code ieee trans inform theory vol july conti boston algebraic structure linear trellises ieee trans inform theory vol may forney convolutional codes algebraic structure ieee trans inform theory vol structural analysis convolutional codes via dual codes ieee trans inform theory vol july coset binary lattices related codes ieee trans inform theory vol appendix profiles trellis complexity linear block codes ieee trans inform theory vol weaver linear trellises characteristic generators ieee trans inform theory vol characteristic generators dualization trellises ieee trans inform theory vol forney local irreducibility trellises ieee trans inform theory vol johannesson zigangirov fundamentals convolutional coding new york ieee press koetter vardy structure trellises minimality basic principles ieee trans inform theory vol kschischang sorokine trellis structure block codes ieee trans inform theory vol lin shao general structure construction 
tail biting trellises linear block codes proc june wolf tail biting convolutional codes ieee trans vol mceliece bcjr trellis linear block codes ieee trans inform theory vol july mceliece lin trellis complexity convolutional codes ieee trans inform theory vol muder minimal trellises block codes ieee trans inform theory vol nori shankar unifying views trellis constructions linear block codes ieee trans inform theory vol riedel map decoding algorithm convolutional codes use reciprocal dual codes ieee select areas vol shany ery linear trellises bound applications codes ieee trans inform theory vol july shao lin fossorier two decoding algorithms tailbiting codes ieee trans vol solomon van tilborg connection block convolutional codes siam appl vol anderson johannesson optimal encoders short trellises ieee trans inform theory vol tajima okino miyagoshi trellis complexity analysis convolutional codes using reciprocal dual codes proc japanese simultaneous reduction convolutional codes using shifted proc tajima okino tailbiting convolutional codes proc tajima trellis reduction convolutional codes using characteristic matrices cyclically shifted ieice technical report japanese march tang lin convolutional codes low trellis complexity ieee trans vol tang lin filho minimal trellis modules equivalent convolutional codes ieee trans inform theory vol weaver minimality duality trellises linear codes dissertation submitted university kentucky lexington kentucky usa april weiss bettstetter riedel code construction decoding parallel concatenated codes ieee trans inform theory vol masato tajima born toyama japan august received eng degrees electrical engineering waseda university tokyo japan respectively joined electronics equipment laboratory toshiba center engaged research development channel coding techniques applications satellite communication systems department intellectual information systems engineering toyama university first associate professor next professor graduate school science engineering university toyama professor currently professor emeritus university toyama research interests coding theory applications
7
may sparsification sddm matrices applications counting sampling spanning trees david john richard anup may abstract show variants spectral sparsification routines preserve total spanning tree counts graphs kirchhoff theorem equivalent determinant graph laplacian minor equivalently sddm matrix analyses utilizes combinatorial connection bridge statistical leverage scores effective resistances analysis random graphs janson combinatorics probability computing leads routine quadratic time sparsifies graph edges ways preserve determinant distribution spanning trees provided sparsified graph viewed random object extending algorithm work schur complements approximate choleksy factorizations leads algorithms counting sampling spanning trees nearly optimal dense graphs give algorithm computes approximation determinant sddm matrix constant probability time first routine graphs outperforms routines computing determinants arbitrary matrices also give algorithm generates time spanning tree weighted undirected graph distribution total variation distance distribution introduction determinant matrix fundamental quantity numerical algorithms due connection rank matrix interpretation volume ellipsoid corresponding matrix graph laplacians core spectral graph theory spectral algorithms theorem gives determinant minor obtained removing one row corresponding column equals total weight spanning trees graph formally weighted graph vertices det graph laplacian total weight spanning trees vector null space need drop last row georgia institute technology email ddurfee massachusetts institute technology email jpeebles georgia institute technology email rpeng georgia institute technology email column work precisely definition sddm matrices numerical analysis study random spanning trees builds directly upon connection tree counts determinants also plays important role graph theory much progress development faster spectral algorithms estimation determinants encapsulates many shortcomings existing techniques many nearly linear time algorithms rely sparsification procedures remove edges graph provably preserving laplacian matrix operator turn crucial algorithmic quantities cut sizes rayleigh quotients eigenvalues determinant matrix hand product eigenvalues result worst case guarantee per eigenvalue needed obtain good overall approximation turn leads additional factors number edges needed sparse approximate due amplification error factor previous works numerically approximating determinants without multiplications usually focus running time give errors additive log determinant estimate multiplicative error exp determinant lack sparsification procedure also led running time random spanning tree sampling algorithms limited sizes dense graphs generated intermediate steps paper show slight variant spectral sparsification preserves determinant approximations much higher accuracy applying guarantees individual edges specifically show sampling edges distribution given leverage scores weight times effective resistances produces sparser graph whose determinant approximates original graph furthermore treating sparsifier random object show spanning tree distribution produced sampling random tree random sparsifier close spanning tree distribution original graph total variation distance combining extensions algorithms sparsification based algorithms graph laplacians leads quadratic time algorithms counting sampling random spanning trees nearly optimal dense graphs sparsification phenomenon surprising several aspects also experimentally 
complete graph edges necessary preserve determinant one first graph sparsification phenomenons requires number edges proof correctness procedure also hinges upon combinatorial arguments based theorem ways motivated result janson complete graphs instead common bound based proofs furthermore algorithm appears far delicate spectral sparsification requires global control number samples high quality estimates resistances running time bottleneck theorem holds constant probability nonetheless use procedure determinant estimation spanning tree generation algorithms still demonstrates serve useful algorithmic tool results def use denote weighted multigraphs denote weighted degree vertex weight spanning tree weighed undirected multigraph def def use denote total weight trees result described following theorem key sparsification theorem given graph parameter compute time graph edges constant probability implies graphs sparsified manner preserves determinant albeit density show make sparsification routine errors estimating leverage scores scheme adapted implicitly sparsify dense objects explicit access particular utilize tools rejection sampling high quality effective resistance estimation via projections extend routine give sparsification algorithms schur complements intermediate states gaussian elimination graphs using ideas sparsification random walk polynomials use extensions routine obtain variety algorithms built around graph sparsifiers two main algorithmic applications follows achieve first algorithm estimating determinant sddm matrix faster general purpose algorithms matrix determinant problem since determinant sddm corresponds determinant graph laplacian one removed theorem given sddm matrix routine detapprox time outputs det high probability crucial thing note distinguishes guarantee similar results give multiplicative approximation det much stronger giving multiplicative approximation log det work typically tries achieve sparsifiers construct also approximately preserve spanning tree distribution leverage yield faster algorithm sampling random spanning trees new algorithm improves upon current fastest algorithm general weighted graphs one wishes achieve slightly variation distance theorem given undirected weighted graph routine approxtree outputs random spanning tree distribution expected time total variation distance distribution prior work graph sparsification general sense graph sparsification procedure method taking potentially dense graph returning sparse graph called sparsifier approximately still many properties original graph introduced preserving properties related minimum spanning trees edge connectivity related problems defined notion cut sparsification one produces graph whose cut sizes approximate original graph defined general notion spectral sparsification requires two graphs laplacian matrices approximate quadratic edges original graph yielding particular spectral sparsification samples graph whose quadratic hence within factor implies determinants approximate within useful perspective preserving determinant since one would need samples edges get constant factor approximation one could instead exactly compute determinant sample spanning trees using exact algorithms runtime results sparsification undirected graphs recently defined useful notion sparsification directed graphs along nearly linear time algorithm constructing sparsifiers notion sparsification determinant estimation exactly calculating determinant arbitrary matrix known equivalent matrix multiplication 
approximately computing log determinant uses identity log det log log whenever one find matrix log log det log log det quickly special case approximating log determinant sdd matrix applies identity recursively matrices sequence ultrasparsifiers inspired recursive preconditioning framework obtain running time polylog estimating log determinant additive error estimates log determinant arbitrary positive definite matrices runtime depends linearly condition number matrix contrast work first know gives multiplicative approximation determinant rather log despite achieving much stronger approximation guarantee algorithm essentially runtime graph dense note also one wishes conduct apples apples comparison setting value small enough order match approximation guarantee algorithm would achieve runtime bound polylog never better runtime bad factor two graphs laplacian matrices approximate quadratic forms cut sizes also approximate specifically take diagonal prove sufficient conditions log determinant quickly approximated choice simplification runtime using substitution gives roughly multiplicative sampling spanning trees previous works sampling random spanning trees combination two ideas could generated using random walks could mapped random integer via kirchoff matrix tree theorem former leads running times form latter approach led routines run time matrix multiplication exponent approaches combined algorithms kelner madry madry straszak tarnawski algorithms based simulating walk efficiently parts graphs combining graph decompositions handle expensive portions walks globally due connection based spanning tree sampling algorithms routines often inherent dependencies edge weights furthermore dense graphs running times still worse time routines previous best running time generating random spanning tree weighted achieved works combining recursive graph procedure similar used recent time algorithms spectral sparsification ideas achieving runtime algorithm time produce tree distribution away takes distribution slower nearly factor algorithm given paper algorithm viewed natural extension approach instead preserving probability single edge chosen random spanning tree instead aim preserve entire distribution spanning trees sparsifier also considered random variable allow significantly reduce sizes intermediate graphs cost higher total variation distance spanning tree distributions characterization random spanning tree present previous works believe interesting direction combine sparsification procedure algorithms organization section introduce necessary notation previously known fundamental results regarding mathematical objects work throughout paper section give sketch primary results concentration bounds total tree weight specific sampling schemes section leverages concentration bounds give quadratic time sparsification procedure edges general graphs section uses random walk connections extend sparsification procedure schur complement graph section utilizes previous routines achieve quadratic time algorithm computing determinant sddm matrices section combines results modifies previously known routines give quadratic time algorithm sampling random spanning trees low total variation distance section extends concentration bounds random samplings arbitrary tree fixed necessary error accounting error estimating determinant algorithm simplification also assuming regime analyze algorithm thus regime compare two random spanning tree sampling algorithm section proves total variation distance bounds given random 
sampling tree algorithm background graphs matrices random spanning trees goal generating random spanning tree pick tree probability proportional weight formalize following definition definition distribution trees let probability distribution refer distribution trees graph unweighted corresponds uniform distribution refer distribution graph unweighted corresponds uniform distribution furthermore manipulate probability particular tree chosen extensively denote probabilities aka def laplacian graph matrix specified def luv write wish indicate graph laplacian corresponds context clear graph define sum weights edges vertices laplacians natural objects consider dealing random spanning trees due matrix tree theorem states determinant corresponding vertex removed total weight spanning trees denote removal vertex index vertex removed affect result usually work furthermore use det denote determinant matrix work mostly graph laplacians also useful define positive determinant remove last row column using notation matrix tree theorem stated det measure distance two probability distributions total variation distance definition given two probability distributions index set total variation distance given def let graph edge write denote graph obtained contracting edge identifying two endpoints deleting self loops formed resulting graph write denote graph obtained deleting edge extend definitions refer graph obtained contracting edges deleting edges respectively also subset vertices use denote graph induced vertex letting edges associated schur complement effective resistances leverage scores matrix tree theorem also gives connections another important algebraic quantity def effective resistance two vertices quantity formally given ref indicator vector everywhere else via adjugate matrix shown effective resistance edge precisely ratio number spanning trees number ref total weight trees contain edge spanning trees contain given ref quantity called statistical leverage score edge denote fundamental component many randomized algorithms sampling sparsifying graphs matrices fact fraction trees containing also gives one way deriving sum quantities fact foster theorem graph resistance ref turn statistical leverage scores estimated using linear system solves random projections simplicity follow abstraction utilized madry straszak tarnawski except also allow intermediate linear system solves utilize sparsifier instead original graph lemma theorem let graph edges every find min log time embedding effective resistance metric high effective resistance satisfying probability allows one compute estimate ref specifically vertex embedding associated explicitly stored log given pair vertices estimate takes log time compute embedding provided one thinks edge weight representing parallel edges equivalently counts spanning trees multiplicity according weight schur complements applications utilize sparsification algorithms recursions based schur complements partition vertices denote using partitions corresponding graph laplacian blocks denote using indices subscripts schur complement onto def use interchangeably note always consider vertex set schur complement onto vertex set eliminate except instances need consider schur complements behave nicely respect determinants determinants suggests general structure recursion use estimating determinant fact matrix invertible det det relationship also suggests exist bijection spanning tree distribution product distribution given sampling spanning trees independently graph laplacian 
formed adding one finally algorithms approximating schur complements rely fact preserve certain marginal probabilities algorithms also use variants facts closely related preservation spanning tree distribution see section details fact let subset vertices graph vertices ref theorem burton premantle set edges graph probability contained random spanning tree det matrix whose entry given standard property schur complements see minor row column indices immediately implies incident vertices putting together fact given partition vertices set edges contained sketch results starting point paper janson gives among things limiting distribution number spanning trees model random graphs concentration result number spanning trees sparsified graph inspired paper algorithmic use sparsification routine motivated sparsification based algorithms matrices related graphs key result prove concentration bound number spanning trees graph sparsified sampling edges probability approximately proportional effective resistance concentration bound let weighted graph vertices edges random subgraph obtained choosing subset edges size uniformly randomly probability subset edges could either single tree union several trees kept bounded precisely since eventually choose treat quantity negligible probability containing fixed tree shown janson lemma tree probability included exp prh denotes product linearity expectation expected total weight spanning trees exp second moment written sum pairs trees due symmetry probability particular pair trees subgraphs depends size intersection following bound shown appendix lemma let graph vertices edges uniformly random subset edges chosen two spanning trees prh exp crux bound second moment janson proof getting handle number tree pairs complete graph edges symmetric alternate way obtain bound number spanning trees also obtained using leverage scores describe fraction spanning trees utilize single edge well known fact random spanning tree distributions edges negatively correlated fact negative correlation suppose subset edges graph easy consequence fact lemma subset edges total weight spanning trees containing given spanning tree combinatorial view edges interchangable complete graph therefore replaced algebraic view terms leverage scores specifically invoking gives following lemma lemma case edges leverage score proven appendix lemma graph edges leverage scores lemma finally prove following bound second moment gives concentration result lemma let graph vertices edges edges statistical leverage scores random subset edges exp exp proof definition second moment prh sum terms size intersection invoking lemma gives exp note trailing term depends pulled outside summation use lemma bound exp upon pulling terms independent substituting gives exp taylor expansion exp exp exp exp exp bound implies set variance becomes less square expectation forms basis key concentration results show section also leads theorem particular demonstrate sampling scheme extends importance sampling edges picked probabilities proportional approximations leverage scores somewhat surprising aspect concentration result difference models model quantity interest number spanning trees particular spanning trees graph approximately normally distributed whereas approximate distribution immediate consequence approximate computing also becomes natural consider speedups random spanning tree sampling algorithms generate spanning tree sparsifier note however hope preserve distribution spanning trees via single sparsifier edges longer 
present account change support instead consider randomness used generating sparsifier also part randomness needed produce spanning trees section show bounds variance suffices bound distances trees lemma suppose distribution rescaled subgraphs parameter tree graph distribution contain prh distribution given distribution induced satisfies note uniform sampling meets property linearity expectation also check importance sampling based routine discuss section also meets criteria combining running time bounds theorem well time random spanning tree sampling algorithm leads faster algorithm corollary graph vertices algorithm generates tree distribution whose total variation random tree distribution time integration recursive algorithms invocation concentration bound leads speedups previous routines investigate tighter integrations sparsification routine algorithms particular sparsified schur complement algorithms provide natural place substitute spectral sparsifiers ones particular identity det determinant matrix minor suggests approximate det approximating det instead subproblems smaller constant factor also leads recursive scheme total number vertices involved layers log type recursion underlies determinant estimation spanning tree sampling algorithms main difficulty remaining determinant estimation algorithm sparsifying preserving determinant note significantly easier others particular independent set schur complement vertices computed independently furthermore well understood sample complements weighted cliques distribution exceeds true leverage scores lemma procedure takes graph vertices parameter produces time subset vertices along graph tsc exp tsc exp exp lemma holds condition event algorithmic applications able add polynomially small failure probability lemma error bounds bound variance implies number spanning trees concentrated close expectation tsc random spanning tree drawn generated graph randomness sparsification total variation distance random spanning tree true schur complement result design schemes finds subset set produce sparsifier recurse however case accumulation error rapid yielding good approximation determinants instead becomes necesary track accumulation variance recursive calls formally cost sparsifying variance size problem means problem size afford error working since sum layer sum variance per layer cost sparsification step sums per layer random spanning tree sampling algorithm section similarly based careful accounting variance first modify recursive schur complement algorithm introduced coulburn give simpler algorithm braches two ways step section leading high level scheme fairly similar recursive determinant algorithm despite similarities accumulation errors becomes far involved due choice trees earlier recursive calls affecting graph later steps specifically recursive structure determinant algorithm considered analogous allows consider subgraphs layer independent contrast recursive structure random spanning tree algorithm show section analogous traversal tree output solution one subproblem affect input subsequent subproblems dependency issues key difficulty considering variance across levels total variation distance tracks discrepancy trees probability returned overall recursive algorithm probability distribution accounting trees leads bounding variances probabilities individual trees picked turn equivalent weight tree divided determinant graph inverse probability tree picked play simliar role determinant determinant sparsification algorithm described however 
tracking value requires analyzing extending concentration bounds case arbitrary tree fixed graph sample remaining edges study section prove bounds analogous concentration bounds section incorporate guarantees back recursive algorithm section recursive call may introduce one new vertex determinant preserving sparsification section ultimately prove theorem primary result regarding determinantpreserving sparsification however section devoted proving following general sparsification routine also forms core subsequent algorithms theorem given undirected weighted graph error threshold parameter along routines sampleedgeg samples edge probability distribution well returning corresponding value must satisfy true leverage score approxleverageg returns leverage score edge error specifically given edge returns value routine detsparsify computes graph edges tree count satisfies exp furthermore expected running time bounded calls sampleedgeg approxleverage constant error calls approxleverage error establish guarantees algorithm using following steps showing concentration bounds sketched section holds approximate leverage scores section show via taking limit probabilistic processes analog process works sampling general graph edges varying leverage scores proof section show via rejection sampling high error one sided bounds statistical leverage scores suffice spectral sparsification also initial round sampling instead approximations leverage scores well pseudocode guarantees overall algorithm given section concentration bound approximately uniform leverage scores similar simplified proof outlined section proofs relied uniformly sampling edges edges edges leverage score within multiplicative aka approximately uniform bound prove analog lemma lemma given weighted edges exp similar proof lemma section utilize bounds probability edges chosen using lemma assumption changed bounds affect term changes upper bound total weight trees contain subset edges produce leverage scores glance product change factor substituted naively proof lemma directly would yield additional term exp turn necessitating sample count however note worst case distortion subset upper bound use lemma sums bounds subsets edges still incorporating allows show tighter bound depends similar proof lemma regroup summation subsets bound fraction trees containing subset via via lemma proof heavily utilize fact bound first two steps first treat symmetric product bound total function bound sum using fact first step utilizes concavity product function bound total sum lemma set values proof claim sum maximized consider fixing variables assume without loss generality function symmetric variables locally changing values keeps second term first term becomes greater shows overall summation maximized equal aka upon substitution gives result second step fact case lemma lemma set values proof note transformation must increase sum means sum maximized half gives half leverage scores proof lemma first derive analog lemma bounding total weights pairs trees containing subsets size start bounds applying lemma inner term summation gives bounds gives via lemma substituting lemma gives implies analog lemma duplicate proof lemma similar proof regroup summation invoking lemma get exp incorporated analog lemma gives exp exp exp leaves exp substituting taylor expansion finishes proof generalization graphs arbitrary leverage score distributions first condition easily achieved splitting edge sufficient number times need done explicitly sparsification algorithm furthermore 
definition statistical leverage score splitting edge copies give copy kth fraction edge leverage score careful splitting ensure second condition require leverage score estimates edges simple approach would compute edges split edge according estimate draw resulting edge set instead utilize algorithm proof technique give sampling scheme equivalent algorithm limiting behavior pseudocode routine algorithm algorithm idealsparsify sample multi edges produce input graph approximate leverage scores sample count initialize empty graph pick edge probability proportional add new weight output exp note sampling scheme replacement probability collision number copies tend sufficiently small covered proof well guarantee show algorithm lemma graph set approximate leverage scores edges graph idealsparsify satisfies exp proof strategy simple claim algorithm statistically close simulating splitting edge large number copies note proofs purely showing convergence statistical processes needed numbers arise proof particular finite first show perturbed become rational numbers lemma graph set edges constant perturbation threshold find graph edge weights rationals entries rational numbers edges proof direct consequence rational numbers everywhere dense perturbing edge weights factor perturbs leverage scores factor total weights trees factor leverage scores integers means exact splitting setting total number split edges multiple common denominator values times specifically edge approximate leverage score becomes copies weight true leverage score particular since splitted graph satisfies condition lemma enables obtain guarantees lemma letting tend proof lemma first show algorithm works graph rational weights approximate leverage scores generated lemma condition established means apply lemma output picking random edges among split copies graph satisfies exp exp ratio second moment affected rescaling graph exp meets requirements expectation variances furthermore rescaled weight single edge picked exp exp exactly algorithm assigns remains resolve discrepancy sampling without replacement probability edge picked twice two different steps total probability duplicate sample bounded give finite bound size probability becomes negligible routine rescaling factor single edge crudely bounded exp exp mine means returned must satisfy exp mine finite result difference causes first second moments become negligible result idealsparsify follows infinitesimal perturbation made rational numbers dense everywhere incorporating crude edge sampler using rejection sampling lemma assumed access leverage scores could computed calls assumed subroutine approxleverageg number edges however roughly associate approxleverageg lemma requires time per call deal aspect proof theorem achieve desired sparsification edges need necessary concentration bounds instead show use rejection sampling take edges drawn approximate leverage scores using cruder distribution require application approxleverageg error expected number edges rejection sampling known technique allows sample distribution instead sampling distribution approximates accept sample specific probability based probability drawing sample specifically suppose given two probability distributions state space constant draw instead drawing accepting draw probability procedure requires lower bound respect order accept draw constant probability need weaker upper bound guarantees guarantees fulfill requirements rejection sampling accept constant fraction draws splitting sufficient number edges ensure 
drawing split edge occur constant probability specifically sample drawn via following steps draw sample according distribution evaluate values keep sample probability running time approxleverageg ultimately depend value apply algorithmic framework also need perform rejection sampling twice constant error leverage scores extracted true approximate distribution pseudocode routine shown algorithm algorithm detsparsify sampleedgeg approxleverageg sample multi edges produce input graph sample count leverage score approximation error sampleedgeg samples edge probability distribution returning corresponding value bounds rate sampleedgeg approxleverageg returns approximate leverage score edge error initialize empty graph fewer edges sampleedgeg approxleverageg let reject probability approxleverageg let reject probability add new weight exp output first show routine fact sample edges according leverage scores assumed idealsparsify lemma edges sampled probability proportional leverage score estimates given approxleverageg note algorithm time access full distribution proof proof assume known guarantees rejection sampling say following true given distributions sampling edge accepting probability equivalent drawing edge long given distributions sampling edge accepting probability equivalent drawing edge long result need check guarantees sampleedgeg gives generated error show guarantees sampleedgeg gives remains show rejection sampling process still makes sufficiently progress yet also call approxleverageg accurate leverage score estimator many times lemma step probability detsparsify calling approxleverageg probability adding edge least proof proof utilizes fact fact extensively edge picked approxleverageg called probability summing edge probability picking gives hand probability picking edge rejecting follows cancellation set algorithm summing edges gives probability rejecting edge proof theorem lemma implies edges sampled detsparsify probability proportional leverage scores guaranteed approxleverageg therefore apply lemma achieve desired expectation concentration bounds finally lemma implies expect sample edges require call sampleedgeg approxleverageg constant error additionally implies expect make calls approxleverageg error directly invoking theorem leads sparsification algorithm proof theorem consider invoking theorem parameters gives implies second condition equivalent varh chebyshev inequality gives constant probability combining bounds adjusting constants gives overall bound constructing probability distribution sampling edges requires computing constant approximate leverage scores edges sampling proportionally edge giving constant value lemma requires time running time dominated calls made effective resistance oracle error invoking lemma gives cost bounded furthermore lemma holds absorb probability failure constant probability bound another immediate consequence sparsification routine theorem along bounds total variation distance prove section give faster spanning tree sampling algorithm dense graphs plugging sparsified graph previous algorithms generating random spanning trees proof corollary proof theorem invoke theorem parameters giving applying lemma proven section drawing tree according distribution gives total variation distance drawing tree according distribution running time drawing dominated calls made effective resistance oracle error invoking lemma gives cost bounded furthermore lemma holds absorb probability failure total variation distance bound implicitly assume polynomially 
small use time algorithm draw tree achieves desired running time total variation distance bound implicit sparsification schur complement note determinant sparsification routine theorem requires oracle samples edges approximate distribution resistance well access approximate leverage scores graph suggests variety naturally dense objects random walk matrices schur complements also sparsified ways preserve determinant minor one vertex removed spanning tree distributions latter objects schur complements already shown lead speedups random spanning tree generation algorithms recently furthermore fact schur complements preserve effective resistances exactly means directly invoke effective resistances data structure constructed lemma produce effective resistance estimates schur complements result main focus section efficient way producing samples distribution approximates drawing schur complement probabilities proportional leverage score follow template introduced eliminating subsets vertices turn allows use walk sampling based implicit sparsification similar subsets used schur complement based linear system solvers facilitate convergence iterative methods block formally condition require definition weighted graph subset vertices every weighted degree shown large sets vertices found trimming uniformly random sample lemma lemma instantiated graphs routine almostindependent graph vertices parameter returns expected time subset given subset proceed sample edges via following simple random walk sampling algorithm pick random edge extend endpoints random walks first reach somewhere incorporating scheme determinant preserving sparsification schemes leads guarantees theorem conditioned lemma holding procedure schursparse takes graph subset vertices returns graph expected time distribution satisfies tsc exp tsc exp exp furthermore number edges set anywhere without affecting final bound let subset vertices produced let complement key idea view corresponds walk starts ends intermediate vertices specifically length walk corresponds weight given def check formally defined exactly via taylor expansion based jacobi iteration lemma given graph partition vertices graph formed corresponding walks starting ending stays entirely within weights given equation exactly proof consider schur complement edges leaving result holds trivially otherwise strictly diagonally dominant matrix therefore full rank write diagonal negation entries expand via jacobi series note series converges strict diagonal dominance implies tends zero substituting place gives entries replace make terms trailing summation positive ways form new entries identity based matrix multiplication gives required identity characterization coupled allows sample way short random walk sparsification algorithms lemma given graph subset access statistical leverage scores sampleedgeschur returns edges according distribution expected time per sample furthermore distribution samples edges satisfies every edge corresponding walk algorithm sampleedgeschur samples edge input graph vertices complement onto implicit access leverage scores output corresponding walk probability picked distribution sample edge randomly probability drawn perform two independent random walks endpoints reach vertex let walk output edge corresponding path equation guarantees procedure analogous random walk sampling sparsification scheme main difference terminating condition walks leads removal overhead related number steps walk modification initial step picking initial edge resistance necessary 
get constant samples limits amount overhead per sample proof first verify indeed probability partitioned walks correspond formally obtain equality note random walk starting vertex total probabilities walks starting ending upper bounded algebraically becomes applying terms edge gives total probability mass starting edge turn total running time since independent step walk takes expected time also value computed time computing products transition probabilities along path instead evaluating summand time total finally need bound approximation compared true leverage scores within constant factor true leverage scores suffices show rsc invoke equivalence effective resistances given fact reverse direction rayleigh monotonicity principle ref substituted expression equation gives sampling procedure immediately combined theorem give algorithms generating approximate schur complements pseudocode routine algorithm algorithm schursparse input graph subset vertices error parameter output sparse schur complement set set build leverage score data structure errors via lemma let detsparsify sampleedgeschur leverageapproxg output proof theorem note choices must ensure equivalent implies schursparse meet conditions ones specifically chosen algorithm also necessary one applications guarantees follow putting quality sampler lemma requirements determinant preserving sampling procedure theorem additionally lemma requires access leverage scores computed lemma time furthermore lemma gives value constant assumption theorem given subset implies expected calls sampleedgeschur require time overheads computation invocations various copies approximate resistance data structures since lemma gives cost bounded approximate determinant sddm matrices section provide algorithm computing approximate determinant sddm matrices minors graph laplacians formed removing one theorem allows sparsify dense graph still approximately preserving determinant graph minor existing algorithm computing determinant good dependence sparsity could achieve improved runtime determinant computation simply invoking algorithm minor sparsified unfortunately current determinant computation algorithms achieve dependent simply reducing edge count directly improve runtime determinant computation instead algorithm give utilize fact det recall determinant matrix minor recursively split matrix specifically partition vertex set based upon routine almostindependent lemma compute schur complements according schursparse theorem algorithm take input laplacian matrix however recursion naturally produces two matrices second laplacian first submatrix laplacian therefore need convert laplacian adding one vertex appropriate edge weights row column sums pseudocode routine algorithm call parameters addrowcolumn algorithm addrowcolumn complete graph laplacian adding one input sddm matrix output laplacian matrix one extra row column let dimension sum entries row call set let output procedure addrowcolumn outputs laplacian obtained one removes added immediately gives det definition give determinant computation algorithm minor graph laplacian get high probability one could use standard boosting tricks involving taking median several estimates determinant obtained fashion algorithm detapprox compute error parameter input laplacian matrix top level error threshold top level graph size output approximate invocation function recursion tree else return weight unique edge graph via lemma almostindependent schursparse value lemma addrowcolumn output detapprox detapprox analysis 
recursive routine consists bounding distortions incurred level recursion tree turn uses fact number vertices across calls within level total amount across calls within level remain unchanged one level next summarized following lemma bounds error accumulated within one level recursion algorithm lemma suppose given small along laplacian matrices corresponding vertex partition let denote result running schursparse remove block def schursparse conditioning upon high probability calls schursparse probability least det lemma applies matrices fixed respect randomness used invocations schursparse mentioned lemma words applies result running schursparse matrices independent result running matrices lemma immediately bounds error within level independence entire algorithm namely event leverage score estimation calls lemma schursparse succeed corresponds values multiplied call parameter schursparse example main steps determinant approximation algorithm well graphs corresponding applying lemma one layers figure vertices laplacian schursparse addrowcolumn schursparse schursparse figure two layers call structure determinant approximation algorithm detapprox algorithm transition first second layer labeled lemma applying lemma layers recursion tree gives overall guarantees proof theorem running time let number vertices edges current graph corresponding respectively calling almostindependent takes expected time guarantees means total recursion terminates log steps running time note recursive calls total number vertices per level recursion running time level also dominated calls schursparse comes sums note running time also obtained standard analyses recursive algorithms specifically applying running time recurrence form correctness shown running time analysis recursion tree depth log total vertices given level associate level recursion algorithm list matrices given input calls making level recursion level recursion consider product applied matrices refer quantity level notice determinant wish compute algorithm actually outputs suffices prove levels probability failure levels however fact set log top level recursion sufficiently small constants immediately follows lemma minor technical issue lemma gives guarantees conditioned whp event however need invoke lemma logarithmic number times absorb polynomially small failure probability total failure probability without issue standard boosting running log independent instances taking medians give desired high probability statement remains bound variances per level recursion proof lemma result fact det consequently suffices show probability least recall denotes random variable approximate schur complement generated call schursparse using fact calls schursparse independent along assumption apply guarantees theorem obtain det exp exp assumption small approximate exp bound gives applying approximation schursparse gives point apply chebyshev inequality obtain desired result random spanning tree sampling section give algorithm generating random spanning tree weighted graph uses schursparse subroutine ultimately prove theorem order first give time recursive algorithm using schur complement exactly generates random tree distribution given algorithm inspired one introduced variants utilized however necessity extensions reduce number branches recursion two giving efficient algorithmic implementation bijective mapping spanning trees spanning trees set vertices removed independent set note also yields alternative algorithm generating random spanning trees distribution time 
runtime recursion achieved similar determinant algorithm reduce proportional decrease number vertices every successive recursive call exactly determinant approximation algorithm section previously stated proven section drawing random spanning tree graph running sparsification routine takes total variation distance distribution similar analysis determinant algorithm directly apply bound tree lower levels recursion contribute far much error decreasing proportional rate total variation distance thus need give better bounds variance across level allowing stronger bounds contribution total variation distance entire level accounting total variance difficult due stronger dependence recursive calls specifically input graph depends set edges chosen first recursive call specifically sparsified version accounting dependency require proving additional concentration bounds shown section specifically achieve sampling edges call schursparse might seem contradictory notion sampling instead consider sampling graph edges generated schur complement kept separate could far edges exact time recursive algorithm start showing algorithm samples trees exact distribution via computation schur complements pseudocode algorithm forms basis approximate algorithm faster routine section essentially inserting sparsification steps recursive calls algorithm exacttree take graph output tree randomly distribution input graph output tree randomly generated distribution one edge return partition evenly exacttree probability delete remaining edges exacttree prolongatetree output procedure prolongatetree invoked maps tree schur complement tree back crucially uses property independent set pseudocode given algorithm lemma procedure exacttree generate random tree uniform distribution time algorithm give similar divide conquer approaches two main facts used approaches summarized follows schur complements preserves leverage score original edges operation taking schur complements operation deleting contracting edge associative make use two facts unlike previous approaches every stage need recurse two previous approaches branching factor least four exploiting structure schur complement one eliminates independent set vertices formalize lemma prove lemma need state important property schur complements follows fact recall notation section weighted graph denotes probability trees picked distribution spanning trees algorithm prolongatetree prolongating tree tree input graph splitting vertices independent set tree output tree create distribution set set randomly assign probability proportional contract contracted vertex neighborhood connect edge probability proportional output lemma let graph partition vertices set edges contained induced subgraph edges treated sum gsc new edges added schur complement proof contains cycle therefore assume contain cycle prove induction size contain cycle edge set tree fact corollary holds suppose corollary holds consider know since assumption fact implies tracking edges various layers schur complement leads another layer overhead recursive algorithms circumvented merging edges generating random spanning tree unsplit edge random spanning following direct consequence definition simple graph formed summing weights lemma let overlapping edges procedure sampling random spanning tree probability edge assign original edge original produces spanning tree leads following partition vertices roughly evenly generate tree create edges using lemma lemma subset precisely intersection random spanning tree means decided edges 
proceed contracting edges deleting edges corresponding let resulting graph let remaining vertices contraction observe independent set complement use another recursive call generate tree utilize fact independent set lift tree efficiently via lemma key idea reducing number recursive calls algorithm partition vertices independent set directly lift tree tree require viewing gsc sum cliques one per vertex plus original edges fact given graph vertex graph induced graph plus weighted complete graph neighbors graph formed adding one edge every pair incident weight degv def weighted degree lemma let graph vertices independent set drawn distribution time prolongatetree returns tree distribution proof running time prolongatetree edges show correctness let represent arising schur complementing vertices one one keeping new edges created process separate represent induced subgraph weighted complete graph neighbors unsplitting procedure lemma function maps tree rest sampling steps maps tree one prove correctness induction size independent set case follows vertex given tree creation first map tree multigraph randomly deciding edge depending weight let lemma therefore contract edges delete edges results star center prolongatetree following decide remaining edges every star graph obtained contracting deleting edges choose exactly one edge randomly according weight process generates random tree star assume lemma holds let key thing note independent set write therefore reasoning take random tree map tree procedure prolongatetree apply inductive hypothesis set map tree prolongatetree implies lemma also remark running time prolongatetree reduced log using dynamic trees abstracted data structure supporting operations rooted forests omit details bottleneck running time procedure fixed show overall guarantees exact algorithm proof lemma correctness follows immediately lemmas running time prolongatetree contracting deleting edges contained takes time note new contracted graph vertex set containing independent set furthermore computing schur complement takes time giving running time recurrence fast random spanning tree sampling using determinant sparsification schur complement next note expensive operation exact sampling algorithm section schur complement procedure accordingly substitute sparse schur complement procedure speed running time however add complication applying line exacttree address need observation schursparse procedure extended distinguish edges original graph schur complement produces lemma procedure schursparse given algorithm modified record whether edge output rescaled copy edge original induced subgraph one new edges generated schur complement gsc proof edges generated random walks via sampleedgeschur whose pseudocode given algorithm produces walk two vertices walk belongs length gsc otherwise give algorithm generating random spanning trees prove guarantees lead main result theorem note splitting line mapping first back tree sparsified multigraph rescaled edges originated tracked separately edges arise new edges involving random walks vertices desired runtime follow equivalently analysis determinant algorithm section decreasing proportionally number vertices remains bound distortion spanning tree distribution caused calls schursparse bounds distortion follow equivalently determinant algorithm also substitutes schursparse exact schur complements due dependencies recursive structure particular calls schursparse independent graphs called upon depend randomness line prolongatetree specifically simply 
resulting edge previously visited vertex partitions within recursion subgraph schursparse called upon additionally dependent vertex partitioning almostindependent key idea proof analysis distortion incurred schursparse layer probability sampling fixed tree considering alternate procedure consider exactly sampling random spanning tree layer along fact consideration restricted fixed tree allow algorithm approxtree take graph output tree randomly distribution distribution input graph error parameter initial number vertices output tree randomly generated distribution distribution via lemma almostindependent schursparse tracking whether edge via modifications lemma approxtree rand ori ori calculated using weights tracked line delete edges remaining vertices schursparse approxtree prolongatetree output separate randomness incurred calls schursparse sources randomness mentioned accordingly provide following definition definition truncated algorithm algorithm given modifying approxtree computations sparsified schur complements replaced exact calls schur complements aka level tree distribution defined output truncated algorithm note particular tree distribution produced exacttree distribution log distribution outputted approxtree primary motivation definition separate randomness calls schursparse level ultimately give following lemma prove end section lemma invocation approxtree graph variance bound layer begin consider differences probability sampling fixed tree recursive call crucial observation two recursive calls approxtree approxtree viewed independent claim call approxtree algorithm return one possible choice generated via lines proof note edges removed line precisely edges endpoints contained fixed set unique unique well allows analyze truncated algorithm splitting probabilities occur level specifically first level viewed pairs graphs along intended trees definition define probabilities returning pair trees belong pair graphs product probability running almostindependent partitioned edges contracted edges deleted probability mapped line probability mapped call prolongatetree line definition allows formalize splitting probabilities level importantly note instead call schursparse generate affect probability calls almostindependent prolongatetree depend consider drawn track edges original graph generated schur complement consequently difference distributions distortion drawing sparsified version handling sparsifiers schur complements simplified following observation claim output schursparse identical output idealsparsify set statistical leverage scores seen revisiting schur complement sparsification rejection sampling algorithms section show statement also extends approximate schur complements produced lines algorithm means let denote distribution produced idealsparsify respectively tree lemma exists collection graphs tree pairs probabilities given definition turn extend via induction multiple levels important note comparing distributions make calls idealsparsify level need additionally consider possible graphs generated sparsification level restrict corresponding exact graphs level definition use denote sequence graphs levels plus peripheral exact schur complements level along spanning trees generated peripheral graphs denote graphs trees exist different vertex sets use set pairs set vertices sequence graphs sequence trees peripherals use denote product probabilities vertex split resulting trees mapping back correctly defined definition times probabilities subsequent graphs generated 
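The distribution comparisons in this analysis are stated in terms of the standard total variation distance; for reference (a textbook definition, not notation specific to this paper):

```latex
% Total variation distance between two distributions p, q over spanning trees T:
d_{\mathrm{TV}}(p, q) \;=\; \tfrac{1}{2}\sum_{T} \bigl|\, p(T) - q(T) \,\bigr|
\;=\; \max_{A} \bigl|\, p(A) - q(A) \,\bigr| ,
% where A ranges over events (sets of trees).
```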
sparsifiers ones denote peripheral graphs denote furthermore use denote one particular product distribution sparsifiers graphs sequence sparsifiers level also define probabilities trees picked sense def def applying lemma inductively allows extend multiple levels corollary exists collection graphs tree pairs tree reduces necessary proof bounding total variation distance examining difference recalling definition expectation inverse probability however critical note concentration bounds total trees probability inverted contained necessitates extending concentration bounds random graphs condition upon certain tree remaining graph done following lemma proven section recall set schursparse lemma let graph vertices edges estimates leverage scores sample count let denote distribution outputs idealsparsify fixed spanning let denote distribution formed conditioning graph containing due independence call idealsparsify apply concentration bounds across product use fact decreases proportionally vertex size algorithm associated sparsifier distribucorollary sequence peripheral graphs tion sequence trees defined definition proof independence calls idealsparsify definition def def applying lemma call idealsparsify set gives total error bounded exp gives bound follows form total size level recursion remains use concentration bounds inverse desired probability bound total variation distance done following lemma viewed extension lemma also proven section lemma let distribution universe elements associated random variable utilize lemma observe values distribution forms probability distribution tuples rescaled play role decoupling total variation distance per tree allows bound overall total variation corresponding terms pairs distance proof lemma definition total variation distance corollary triangle inequality upper bound probability crucially inner term scalar summation equivalent goal use lemma distribution tuples density equaling density distribution corresponding value values equaling note fact maps back tree imply distribution well rescaled version corollary gives required conditions lemma turn gives overall bound proof theorem running time follows way analysis determinant estimation algorithm proof theorem end section correctness total variation distance bound implied appropriately setting invoking bound lemma note factors log absorbed notation finally note simplicity analysis total variation distance account failure probability lemma account simply use fact log calls schursparse made hence probability call failing polynomially small absorbed total variation distance conditional concentration bounds section extend concentration bounds conditioning certain tree sampled graph specifically goal proving lemma edge splitting arguments similar section suffices analyze case edges leverage score lemma let graph vertices edges edges statistical leverage scores sample count let subgraph containing edges picked random without replacement let denote distribution subgraphs edges furthermore fixed spanning tree let denote distribution induced contain use denote graph note uniform leverage score requirement strict analysis lemma eventually aiming bound samples also means constant factor leverage score approximations suffices routine starting point proof observation uniform sampling term dependent proof follow showing concentration variable done similarly concentration done section primary difficulty extending proof come fact trees different probabilities sampled graph depending many edges share much dealt assumption 
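As a rough illustration of the sparsification primitive being analyzed, the following Python sketch performs generic leverage-score sampling in the spirit of idealsparsify: edges are drawn with probability proportional to (estimates of) their leverage scores and rescaled so that the Laplacian is preserved in expectation. It draws samples i.i.d. for simplicity, whereas the routine analyzed here samples without replacement, so treat this purely as a sketch under those assumptions; all names are illustrative.

```python
import random

def leverage_score_sparsify(edges, leverage, s):
    """Spectral sparsification by leverage-score sampling (sketch).

    edges:    list of (u, v, weight) tuples
    leverage: list of leverage-score estimates, one per edge
    s:        sample count (larger s => smaller spectral distortion)
    Returns a list of (u, v, new_weight); repeated picks of the same edge
    accumulate weight, so the expected Laplacian equals the input's.
    """
    total = sum(leverage)
    probs = [t / total for t in leverage]
    picks = random.choices(range(len(edges)), weights=probs, k=s)
    out = {}
    for i in picks:
        u, v, w = edges[i]
        out[(u, v)] = out.get((u, v), 0.0) + w / (s * probs[i])
    return [(u, v, w) for (u, v), w in out.items()]
```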
makes exponential terms probabilities associated tree sampled graph negligible additionally assumption implies fixed tree expected number edges shares random tree close result trees intersect negligible contributions analysis follow similarly section note due larger sample count concentration bounds section also hold would fact slightly simpler prove edges sampled independently probability keep assumption sampling edges globally without replacement though order avoid changing algorithm analysis require much additional work section organized follows section give upper lower bounds expectation section give upper bound variance section combine bounds previous two sections prove lemma upper lower bounds conditional expectation order prove upper lower bounds first give several helpful definitions corollaries lemmas assist proof examination require approximations fixing edges drawing edges remaining edges edge probability sampled graph denote probability def often easier exchange def probability single edge picked without conditioning errors governed remark errors turn acceptable even raised power furthermore assumption implies expect randomly chosen tree intersect often implicitly show form geometric series bound immediately implied assumption lemma change sampling procedure alter formulation first want write terms values familiar losing small errors additionally many exponential terms previous analysis immediately absorbed approximation error assumption lemma let graph vertices edges value fix tree random subset edges containing probability edge picked sample proof given edges remaining edges chosen edges accordingly tree probability obtained dividing number subsets edges contain edges number subsets edges following proof lemma reduces exp reduce using assumption turn obtain via linearity expectation subdivide summation based amount edges intersection move term inside summation finally use equation replace also require strong lower bound expectation following lemma shows trees intersect restricting ourh consideration trees much easier work obtaining lower bound lemma let graph vertices edges edges statistical leverage scores tree proof definition classify trees intersection consider inner summation separating possible forest edges gives invoking lemma inner summation fact edges gives upper bound forests utilize upper bound achieve lower bound rearranging initial summation applying assumption lemma gives desired result necessary tools place give upper lower bounds expectation terms note also close approximation assumption lemma let graph vertices edges edges statistical leverage scores let fix tree random subset edges contain proof first prove upper bound lemma proof similar lemma gives moving outside summation substituting gives applying corollary upper bound summation gives lower bound first using lemma restrict trees intersect using lemma formally upper bound conditional variance bound variance upper bounding way similar lemma assumption means situation simpler exponential term negligible proof lemma often separate summations pairs trees based upon number edges intersection frequently invoke lemma however moving pieces summation due intersections lemma proven later section analogous lemma much involved lemma let graph vertices edges edges statistical leverage scores sample count tree let denote random subset edges proof analogous reasoning proof lemma pair trees consequence equation specifically bound upper bound obtain turn summing pairs trees note furthermore separate summation per usual 
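A fact used repeatedly in these counting arguments is the negative correlation of edges in a random spanning tree; it can be summarized as follows (a standard property, stated here with tau_e = w_e R_e denoting the statistical leverage score of edge e):

```latex
% For any forest F and a weighted random spanning tree T:
\Pr[\, F \subseteq T \,] \;\le\; \prod_{e \in F} \Pr[\, e \in T \,]
\;=\; \prod_{e \in F} \tau_e .
```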
possible size bring terms outside summation depend values order deal inner summation need separate based size note last term bounded lemma stated proven immediately incorporating resulting bound grouping terms summations respectively gives summation use crude upper bound plug lemma upper bounds summation giving remains prove following bound number pairs trees certain intersection size following lemma generalization lemma proven analogously using negative correlation edges spanning trees fact lemma lemma let graph edges vertices every edges leverage score tree integers proof first separate summation possible forests size could intersection first consider inner summation relax requirement needing note equivalent allows separate summation particular terms involving examine first term product second follow equivalently split summation possible forests size could intersect relax contain since disjoint restrict disjoint well relaxing requiring instead assumption disjoint means union must exactly edges apply lemma inner summation use fact sets achieve upper bound similarly also obtain along fact edge sets size gives desired bound concentration inverse probabilities complete proof lemma using concentration results number trees sampled graph conditioned upon certain tree contained graph proof lemma definition lemma give exp condition allows bound term hexp incorporating approximation lemma gives definition implies bounds expectation variance bound use identity definition reduces last inequality incorporating lemmas applying lemma using condition bound exp gives variance bound follows definition bounding total variation distance section first bound total variation distance drawing tree distribution uniformly sampling edges drawing tree distribution first bound based concentration number trees give time algorithm sampling spanning trees corollary next give general bound total variation distance two distributions based concentration inverse probabilities resulting lemma used proving bound total variation distance recursive algorithm given section however bound requires higher sample count direct derivation distances concentration bounds still necessary uses edge sparsifier corollary simple total variation distance bound concentration bounds give proof total variation distance bounded based concentration spanning trees sampled graph proof lemma substituting definition definition total variation distance gives substituting conditions definition prh given condition using fact prh distribute first term prh condition simplifies iff simplifies prh triangle inequality gives prh point rearrange summation obtain definition simplifies inequality distributions instantiated random variable function get teh total variation distance bound inverse probability concentration give proof lemma general bound total variation distance based upon concentration results inverse probabilities lemma let random variable entire support given var proof chebyshev inequality gives furthermore assume reduces inverting reversing inequalities gives using fact conclude implies proves lemma bound allow bound close value arbitrarily large bound bounds probability events handle treating case small separately account total probability cases via summations first show distributions truncated avoid small case variance bounded lemma let random variable parameters entire support var proof since decompose expected value buckets via log pry last term guarantee lemma gives intermediate probability terms bounded last one bounded gives total log 
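The Chebyshev step, followed by inverting the resulting two-sided bound, is standard; with mu = E[P] and 0 < a < mu it reads as below (the exact constants in the paper's version may differ):

```latex
\Pr\bigl[\, |P - \mu| \ge a \,\bigr] \;\le\; \frac{\operatorname{Var}(P)}{a^{2}},
\qquad\text{and on the complementary event}\qquad
\Bigl|\, \tfrac{1}{P} - \tfrac{1}{\mu} \,\Bigr|
\;=\; \frac{|\mu - P|}{\mu\, P}
\;\le\; \frac{a}{\mu(\mu - a)} .
```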
complete proof via argument similar proof lemma section additional step definition bad represents portion random variable high deviation proof lemma define scaling factor corresponding probability def triangle inequality handle case close far separately requires defining portion values large variance def bad supp lemma gives bad outer distribution factoring value gives gives bad bad define fixed distributions peu distribution whose values set whenever badu lemma gives taken support written indicator variables bad combining lower bound mass complements bad sets equation via triangle inequality gives bad upon taking complement bad together equation gives bad combining two summations invoking triangle inequality start gives bound references arash asadpour michel goemans aleksander madry shayan oveis gharan amin saberi log log log algorithm asymmetric traveling salesman problem proceedings annual acmsiam symposium discrete algorithms soda pages philadelphia usa society industrial applied mathematics stephen alstrup jacob holm kristian lichtenberg mikkel thorup maintaining information fully dynamic trees top trees acm transactions algorithms talg david aldous random walk construction uniform spanning trees uniform labelled trees siam journal discrete mathematics pages christos boutsidis petros drineas prabhanjan kambadur anastasios zouzias randomized algorithm approximating log determinant symmetric positive definite matrix corr available http christos boutsidis petros drineas prabhanjan kambadur anastasios zouzias randomized algorithm approximating log determinant symmetric positive definite matrix corr david karger approximating minimum cuts time proceedings annual acm symposium theory computing stoc pages new york usa acm robert burton robin pemantle local characteristics entropy limit theorems spanning trees domino tilings via annals probability pages andrei broder generating random spanning trees proceedings annual symposium foundations computer science focs pages walter baur volker strassen complexity partial derivatives theoretical computer science dehua cheng cheng yan liu richard peng teng efficient sampling gaussian graphical models via spectral sparsification proceedings conference learning theory pages available http charles colbourn robert day louis nel unranking ranking spanning trees graph journal algorithms michael cohen jonathan kelner john peebles richard peng anup rao aaron sidford adrian vladu algorithms markov chains new spectral primitives directed graphs accepted stoc preprint available https charles colbourn wendy myrvold eugene neufeld two algorithms unranking arborescences journal algorithms michael cohen nearly tight oblivious subspace embeddings trace inequalities proceedings annual symposium discrete algorithms pages siam michael cohen richard peng row sampling lewis weights proceedings annual acm symposium theory computing stoc pages new york usa acm available http david durfee rasmus kyng john peebles anup rao sushant sachdeva sampling random spanning trees faster matrix multiplication corr david eppstein zvi galil giuseppe italiano amnon nissenzweig sparsification mdash technique speeding dynamic graph algorithms acm september wai shing fung ramesh hariharan nicholas harvey debmalya panigrahi general framework graph sparsification proceedings fortythird annual acm symposium theory computing pages acm https navin goyal luis rademacher santosh vempala expanders via random spanning trees proceedings twentieth annual symposium discrete algorithms soda pages 
philadelphia usa society industrial applied mathematics alain guenoche random spanning tree journal algorithms timothy hunter ahmed alaoui alexandre bayen computing symmetric diagonally dominant matrices time corr available http timothy hunter ahmed alaoui alexandre bayen computing symmetric diagonally dominant matrices time corr roger horn charles johnson matrix analysis cambridge university press insu han dmitry malioutov jinwoo shin computation stochastic chebyshev expansions icml pages available https nicholas harvey keyulu generating random spanning trees via fast matrix multiplication latin theoretical informatics volume pages ilse ipsen dean lee determinant approximations svante janson numbers spanning trees hamilton cycles perfect matchings random graph combinatorics probability computing gorav jindal pavel kolev richard peng saurabh sawlani density independent algorithms sparsifying random walks corr gustav kirchhoff ber die der gliechungen auf welche man bei der untersuchung der linearen vertheilung galvanischer wird poggendorgs ann phys pages rasmus kyng yin tat lee richard peng sushant sachdeva daniel spielman sparsified cholesky multigrid solvers connection laplacians proceedings annual acm sigact symposium theory computing pages acm available http jonathan kelner aleksander madry faster generation random spanning trees proceedings annual symposium foundations computer science focs pages available https vidyadhar kulkarni generating random combinatorial objects journal algorithms aleksander madry damian straszak jakub tarnawski fast generation random spanning trees effective resistance metric proceedings annual symposium discrete algorithms soda pages available http richard peng daniel spielman efficient parallel solver sdd linear systems proceedings annual acm symposium theory computing stoc pages new york usa acm available http daniel spielman nikhil srivastava graph sparsification effective resistances siam journal computing daniel dominic sleator robert endre tarjan binary search trees journal acm jacm daniel spielman teng spectral sparsification graphs siam july daniel spielman teng nearly linear time algorithms preconditioning solving symmetric diagonally dominant linear systems siam journal matrix analysis applications available http joel tropp tail bounds sums random matrices found comput august available http vishnoi laplacian solvers algorithmic applications virginia vassilevska williams multiplying matrices faster proceedings annual acm symposium theory computing stoc pages new york usa acm available https deferred proofs provide detailed proofs combinatorial facts random subsets edges discussed briefly section proof lemma probability obtained dividing number subsets edges contain edges number subsets edges using gives two terms simplified rule furthermore exp use taylor expansion exp obtain exp substituting gives exp exp absorbed assumed proof lemma prh exp invoking identity gives prh using algebraic identity exp dropping trailing negative lower order term gives prh exp upon pull term exponential get term depends grouping term together exp term using fact exp gives result proof lemma first separate summation terms possible forests size pair trees could intersect consider inner summation number pairs trees particular set size upper bounded square number trees containing allow directly incorporate bounds lemma turn assumption obtain bound furthermore number possible subsets bounded even crudely incorporating gives bounded
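The counting identity underlying the first deferred proof is elementary; for a uniformly random subset H of s out of m edges and any fixed set F of k <= s edges:

```latex
\Pr[\, F \subseteq H \,]
\;=\; \frac{\binom{m-k}{\,s-k\,}}{\binom{m}{s}}
\;=\; \prod_{i=0}^{k-1} \frac{s-i}{m-i} .
```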
computing geodesic diameter center polygonal sang matias joseph yoshio valentin haitao nov kyonggi university suwon south korea swbae tohoku university sendai japan mati stony brook university new york usa jsbm university tokyo japan okamotoy university sweden utah state university utah usa abstract polygonal domain holes total vertices present algorithms compute geodesic diameter time geodesic center time respectively denotes inverse ackermann function algorithms known problems euclidean counterpart best algorithms compute geodesic diameter log time compute geodesic center log time therefore algorithms significantly faster algorithms euclidean problems algorithms based several interesting observations shortest paths polygonal domains keywords phrases geodesic diameter geodesic center shortest paths polygonal domains metric introduction polygonal domain closed connected polygonal region plane holes simple polygons let total number vertices regarding boundary obstacles consider shortest paths lying two points geodesic distance length shortest path geodesic diameter simply diameter maximum geodesic distance pairs points closely related diameter quantity point minimizes called geodesic center simply center quantities called euclidean depending euclidean metric adopted measure length paths simple polygons euclidean geodesic diameter center studied since diameter chazelle gave first algorithm followed log algorithm suri finally hershberger suri gave algorithm computing diameter center log algorithm asano toussaint pollack sharir rote gave log time algorithm computing geodesic center recently ahn solved problem time preliminary version paper appeared proceedings international symposium theoretical aspects computer science stacs bae korman mitchell okamoto polishchuk wang licensed creative commons license leibniz international proceedings informatics schloss dagstuhl informatik dagstuhl publishing germany geodesic diameter center general case problems difficult euclidean diameter problem solved log time euclidean center problem first solved time improved log time algorithm given versions geodesic diameter center simple polygons computed linear time unaware previous algorithms polygonal domains paper present first algorithms compute geodesic diameter center polygonal domain defined time respectively inverse ackermann function comparing algorithms problems euclidean metric algorithms much efficient especially significantly smaller discussed main difficulty polygonal domains seemingly arises fact several topologically different shortest paths two points case simple polygons bae korman okamoto observed euclidean diameter realized two interior points polygonal domain case two points least five distinct shortest paths difficulty makes algorithm suffer fairly large running time similar issues also arise metric diameter may also realized two interior points seen extending examples take different approach first construct cell decomposition geodesic distance function restricted pair two cells explicitly described complexity consequently diameter center obtained exploring pieces geodesic distance leads simple algorithms compute diameter time center time help extended corridor structure reduce complexity decomposition another coarser decomposition complexity another crucial observation lemma one may compute diameter time using techniques time algorithm one main contributions additional series observations lemmas allow reduce running time observations along decomposition may applications well idea computing 
center similar motivated study versions diameter center problems polygonal even domains several reasons first metric natural well studied optimization routing problems models actual costs rectilinear road networks certain applications indeed diameter center problems simpler setting simply connected domains studied second metric approximates euclidean metric improved understanding algorithmic results one metric assist understanding metrics continuous dijkstra methods shortest paths directly led improved results euclidean shortest paths preliminaries subset denote boundary denote line segment endpoints length defined respectively respectively polygonal path let length sum lengths segments following path always refers polygonal path path monotone short every vertical bae korman mitchell okamoto polishchuk wang horizontal line intersects one connected component following basic observation length paths used discussion fact monotone path two points holds view boundary polygonal domain series obstacles path allowed cross throughout paper unless otherwise stated shortest path always refers shortest path path always refers always refers geodesic simplicity discussion make general position assumption two vertices following also exploited basic fact discussion fact simple polygon unique euclidean shortest path two points path also shortest path rest paper organized follows section introduce cell decomposition exploit preliminary algorithms computing diameter center algorithms improved later section based extended corridor structure new observations discussed section one may consider preliminary algorithms section relatively straightforward present following reasons first provide overview problem structure second help reader understand sophisticated algorithms given section third parts also needed algorithms section cell decomposition preliminary algorithms section introduce cell decomposition exploit preliminary algorithms compute diameter center first build horizontal trapezoidal map extending horizontal line vertex end line hits next compute vertical trapezoidal map extending vertical line vertex ends extended lines overlay two trapezoidal maps resulting cell decomposition see fig extended horizontal vertical line segments called diagonals note diagonals cells cell bounded two four diagonals one edge thus appears trapezoid triangle let set vertices incident note abuse notation let also denote set cells decomposition cell intersection trapezoid horizontal trapezoidal map another one vertical trapezoidal map two cells aligned contained trapezoid horizontal vertical trapezoidal map unaligned otherwise lemma crucial computing diameter center lemma let two cells point point aligned otherwise exists shortest path passes two vertices see fig proof two cells aligned contained trapezoid vertical horizontal trapezoidal map since convex two points joined straight segment suppose unaligned see fig let shortest path first observe intersects one horizontal diagonal one geodesic diameter center figure cell decomposition figure illustrating lemma shortest path shortest path vertices vertical diagonal bound see fig highlighted red color otherwise must aligned since bounding intersection vertex see fig let first intersection along similarly define first intersection since horizontal vertical union two line segments shortest path replace portion another path since monotone length equal fact implies shortest path passes vertex symmetrically argument applied side destination cell implies modified shortest path passes 
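The basic L1 fact invoked throughout is that any xy-monotone path between two points has length exactly the L1 distance between them; it is easy to check computationally, as in this minimal Python sketch (point and path representations are illustrative assumptions):

```python
def l1_path_length(path):
    """L1 length of a polygonal path given as a list of (x, y) vertices."""
    return sum(abs(x2 - x1) + abs(y2 - y1)
               for (x1, y1), (x2, y2) in zip(path, path[1:]))

def l1_dist(p, q):
    """L1 distance between two points; any xy-monotone path from p to q
    attains exactly this length (the basic fact used throughout)."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

# e.g. the staircase (0,0)->(1,0)->(1,2)->(3,2) is monotone, so its length
# equals l1_dist((0,0),(3,2)) == 5
assert l1_path_length([(0, 0), (1, 0), (1, 2), (3, 2)]) == l1_dist((0, 0), (3, 2)) == 5
```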
simultaneously vertex lemma thus follows computing geodesic diameter section present time algorithm computing diameter general idea consider every pair cells separately pair cells compute maximum geodesic distance called diameter since decomposition diameter equal maximum value constrained diameters pairs cells handle two cases depending whether aligned aligned lemma distance since distance function convex diameter always realized pair two vertices thus done checking pairs vertices time following assume unaligned consider point point vertex vertex consider path obtained concatenating shortest path let length lemma ensures since constant function linear thus easy compute diameter know value every pair vertices bae korman mitchell okamoto polishchuk wang lemma two cells diameter computed constant time provided every pair computed proof case aligned easy discussed thus assume unaligned assume know value every pair recall note since linear function domain graph appears hyperplane space thus geodesic distance function restricted corresponds lower envelope hyperplanes since constant number pairs function also explicitly constructed time finally find highest point graph traversing faces vertex straightforward method compute vertices log time first computing shortest path map spm log time computing log time instead give faster sweeping algorithm lemma making use property vertices every diagonal sorted lemma vertex evaluate vertices time proof algorithm attains efficiency using property vertices diagonal sorted specifically suppose represented standard data structure doubly connected edge list traversing diagonal either vertical horizontal obtain vertically horizontally sorted list vertices diagonal first compute shortest path map spm log time apply standard sweeping technique say sweep spm vertical line left right events sweep line hits vertices spm obstacle vertices vertical diagonals note vertex either vertical diagonal obstacle vertex use standard technique handle events vertices spm event costs log time event vertical diagonal simply linear search sweeping status find cells spm contain cell vertices diagonal event takes time since diagonal vertices note total number events hence running time sweeping algorithm thus preprocessing two cells diameter computed time lemma since cells suffices handle pairs cells resulting candidates diameter maximum diameter hence obtain following result theorem geodesic diameter computed time computing geodesic center present algorithm computes center observation lemma plays important role algorithm geodesic diameter center point define maximum geodesic distance point center defined point approach based decomposition cell want find point minimizes maximum geodesic distance call point center thus center clearly center must center algorithm thus finds center every last results candidates center consider cell compute center investigate function restricted exploit lemma utilize lemma point define upper envelope domain algorithm explicitly computes functions computes upper envelope graphs center corresponds lowest point observe following function lemma function piecewise linear complexity proof recall proof regard restricted use coordinate system introducing axes thus may write max graph function consists linear patches shown proof lemma fully identify geodesic distance function consider graph hypersurface space additional axis project graph onto precisely projection set thus determined highest point intersection projection parallel line point implies function simply 
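The per-cell-pair distance used by the diameter algorithm can be sketched as follows: by the lemma above, for unaligned cells some shortest path passes through a cell vertex on each side, so the restricted distance is the lower envelope of O(1) linear functions. The dictionary of precomputed vertex-to-vertex geodesic distances and all names below are illustrative assumptions; the actual algorithm maximizes this envelope over the two cells symbolically rather than evaluating it pointwise.

```python
from itertools import product

def cell_pair_distance(p, q, VA, VB, dgeo):
    """Geodesic L1 distance restricted to an unaligned cell pair:
    d(p, q) = min over cell vertices u of A and v of B of
              |p - u|_1 + d(u, v) + |v - q|_1,
    a lower envelope of O(1) functions that are linear in (p, q).

    VA, VB: vertex lists of the two cells; dgeo[(u, v)]: precomputed
    vertex-to-vertex geodesic distances (an assumed lookup table).
    """
    l1 = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
    return min(l1(p, u) + dgeo[(u, v)] + l1(v, q) for u, v in product(VA, VB))
```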
corresponds upper envelope projection since consists linear patches upper envelope projection concludes proof ready describe compute center first handle every cell compute graph thus gather linear patches let family linear patches compute upper envelope find lowest point upper envelope corresponds center since lemma upper envelope computed time executing algorithm edelsbrunner denotes inverse ackermann function following theorem summarizes algorithm theorem geodesic center computed time proof preprocessing compute geodesic distances pairs vertices time show fixed center computed time discussed proof lemma geodesic distance function restricted along graph specified time lemma describe function time last task compute upper envelope time discussed executing algorithm edelsbrunner bae korman mitchell okamoto polishchuk wang figure triangulation graph obtained dual graph whose nodes edges depicted black dots red solid curves junction triangle corresponds node removing junction triangles results three corridors figure corresponds edge graph figure hourglasses corridors dashed segments diagonals junction triangles open five bays seen bay gate shown shaded region closed three bays canal shaded region depicts canal two gates exploiting extended corridor structure section briefly review extended corridor structure present new observations crucial improved algorithms section corridor structure used solving shortest path problems later new concepts bays canals ocean introduced referred extended corridor extended corridor structure let denote arbitrary triangulation see figure obtain log time time let denote dual graph node corresponds triangle edge connects two nodes corresponding two triangles sharing diagonal based one obtain planar graph possibly loops repeatedly removing nodes contracting nodes resulting graph faces nodes edges node graph corresponds triangle called junction triangle removal junction triangles results components called corridors corresponds edge graph see figure refer details next briefly review concepts bays canals ocean refer details let holes outer polygon simplicity hole may also refer unbounded region outside hereafter boundary corridor consists two diagonals two paths along boundary holes respectively possible hole case one may consider two paths respectively let endpoints two paths respectively diagonals bounds junction triangle see figure let denote euclidean shortest path inside region bounded called hourglass either open closed otherwise open convex chains called sides otherwise consists two funnels path joining two apices geodesic diameter center two funnels called corridor path two funnel apices figure connected called corridor path terminals note funnel comprises two convex chains consider region minus interior consists number simple polygons facing sharing edge one call simple polygons bay facing single hole canal facing holes bay bounded portion boundary hole segment two obstacle vertices consecutive along side call segment gate bay see figure hand exists unique canal corridor closed two holes bound canal canal case completely contains corridor path canal two gates two segments facing two funnels respectively corridor path terminals vertices funnels see figure note bay canal simple polygon let union junction triangles open hourglasses funnels call ocean boundary consists convex vertices reflex chains side open hourglass funnel note consists bays canals convenience discussion define way contain gates hence gates contained therefore point either triangulation obtained 
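In symbols, the quantities just computed are the eccentricity and its minimizer (standard definitions restated for clarity, with d the L1 geodesic distance):

```latex
\mu(s) \;=\; \max_{p \in \mathcal{P}} d(s, p),
\qquad
\mathrm{cen}(\mathcal{P}) \;=\; \operatorname*{arg\,min}_{s \in \mathcal{P}} \mu(s),
% so over each cell, \mu is the upper envelope of linear patches, and the
% lowest point of the global upper envelope realizes the center.
```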
computing ocean bays canals done time roughly speaking reason partition ocean bays canals facilitate evaluating distance two points example use similar method section evaluate however challenging case happens one bay canal following lemma one key observations improved algorithms section essentially tells point bay canal farthest point achieved boundary similar spirit simple polygon case lemma let point bay canal exists equivalently proof recall gates contained ocean let closure consists gates let geodesic distance since simple polygon fact implies unique euclidean shortest path general depending whether bay canal proof consider two cases first prove basic property follows basic property consider point point claim exists shortest path following property crosses gate component unique euclidean shortest path points bae korman mitchell okamoto polishchuk wang figure illustration proof lemma bay consider shortest path crosses gate least twice let first last points encounter walking along replace portion line segment fact obtain another shortest path crosses one repeat procedure gates shortest path crossing gate take connected component shortest path two endpoints inside implies replace component repeat components obtain another shortest path desired property bay case prove lemma first prove case bay unique gate recall gate contained depending whether two cases suppose let point claim exists shortest path property since lie cross thus contained moreover property hence lies lemma trivially holds suppose lies interior extend last segment hits point boundary see figure since simple polygon extended path indeed argument strictly larger suppose shortest path point must cross gate implies show exists point holds implies purpose consider union union forms funnel plus euclidean shortest path apex extend last segment point boundary similarly previous case thus otherwise let two vertices adjacent apex see figure observe segment separates gate hence path crosses claim exists ray moves along fixed nondecreasing claim true select lemma follows since holds geodesic diameter center figure illustration proof lemma canal two gates next prove claim let disk centered radius see figure since lies boundary move along ray direction outwards decreasing also implies let set rays moves along decreasing goal thus show pick ray intersection purpose consider four since disks set depends quadrant belongs precisely common quadrant set stays constant example lying quadrant commonly set rays direction inclusively since lies edge case thus directions rays span angle exactly moreover line segment thus intersects three quadrants therefore equal intersection three different sets rays whose directions span angle figure illustrates example scene intersects three quadrants centered implies canal case proved lemma bay case bay next turn canal case suppose canal let two gates two corridor path terminals see figure extend interior direction opposite note due definition canals extension always goes interior refer detailed discussion let first point hit extension line segment partitions two simple polygons one containing denoted consider edge convenience discussion assume contain segment define analogously gate view bay gate apply identical argument done bay case concluding exists done otherwise according analysis bay case case move along since max done case analogous bae korman mitchell okamoto polishchuk wang observe suppose let passes corridor path terminal since symmetrically passes consider shortest path property classify one 
following three types lies inside walking along last gate crossed note falls one three cases case indeed case consists shortest path thus symmetrically case depending whether handle two possibilities following assume property discuss shortest path suppose shortest path must cross gate means shortest path type min three regions consider decomposition two decomposition clearly geodesic voronoi diagram simple polygon sites additive weights see aronov papadopoulou lee also region called bisector property voronoi let diagrams path connecting two points intersection moreover move along one direction nondecreasing thus attained endpoint lemma trivial let endpoint direction away property bisector hence lies boundary say case conclude lemma otherwise may lie apply analysis bay case find point lies interior extend last segment hits point note lies case apply argument find point case lies interior handled analogously finally suppose consider geodesic voronoi diagram three sites additive weights respectively done observe maximum value attained voronoi vertex former case analyzed prove latter case happen note three shortest paths different types voronoi vertex exactly two shortest paths types point bisector following show bisector appear voronoi diagram implies voronoi diagram vertex suppose contrary bisector appears nonempty voronoi edge voronoi diagram point let two shortest paths type respectively thus passes passes property crosses first walking along reach geodesic diameter center figure illustration proof lemma figure illustrating two shortest paths red canal dotted blue solid intersecting canal two gates crosses first let last point respectively walk along claim intersect point indeed consider loop formed subpath segment subpath see figure also let disk centered arbitrarily small radius point lying loop separate subpath must intersect point see fig thus claim follows otherwise subpath must cross point thus claim also follows let see fig subpath respectively property shortest paths hence replacing results another shortest path lies inside otherwise crosses twice thus replacing subpath results another shortest path inside either way another shortest path type hence contradiction assumption lies bisector finishes proof lemma shortest paths ocean discuss shortest paths ocean recall corridor paths contained canals terminals using corridor paths finding euclidean shortest path two points reduced convex case since consists convex chains example suppose must shortest path lies union corridor paths consider two points shortest path shortest path possibly contains corridor paths intuitively one may view corridor paths shortcuts among components space since consists convex vertices reflex chains complementary region refers union holes partitioned set convex objects total vertices extending segment inward convex vertex view objects obstacles shortest path avoiding obstacles possibly containing corridor paths note bae korman mitchell okamoto polishchuk wang core figure illustrating core convex figure illustrating cell decomposiobstacle red squared vertices corridor tion red squared vertices core vertices path terminals green cell boundary cell algorithms work directly without using ease exposition discuss algorithm help convex obstacle four extreme vertices topmost bottommost leftmost rightmost vertices may corridor path terminals boundary connect extreme vertices corridor path terminals consecutively line segments obtain another polygon denoted core called core see figure let pcore denote complement union 
cores core corridor paths note number vertices pcore pcore pcore let dcore geodesic distance pcore core structure leads efficient way find shortest path two points chen wang proved shortest path pcore locally modified shortest path without increasing length lemma two points dcore holds hence compute two points sufficient consider cores corridor paths pcore thus reduce problem size let spmcore shortest path map source point spmcore complexity computed log time decomposition ocean introduce cell decomposition ocean see figure order fully exploit advantage core structure designing algorithms computing geodesic diameter center vertices core called core vertices construction analogous previous cell decomposition first extend horizontal line core vertex hits horizontal diagonal extend vertical line core vertex endpoint horizontal diagonal resulting cell decomposition induced diagonals hence constructed respect core vertices note consists cells built log time typical plane sweep algorithm call cell boundary cell boundary cell portion appears convex chain construction core since may contain multiple vertices complexity may constant cell rectangle bounded four diagonals geodesic diameter center vertex either endpoint diagonal intersection two diagonals thus number vertices prove analogue lemma decomposition let set vertices incident note define alignedness relation two cells analogously observe analogy lemma lemma let two cells aligned otherwise exists shortest path containing two vertices proof first discuss case aligned case bounded two consecutive parallel diagonals let region two diagonals since consists two monotone concave chains two diagonals construction difficult see joined monotone path inside implies fact next consider unaligned case suppose unaligned lemma exists shortest path lies inside union corridor paths proof case analogous lemma since unaligned two possibilities walk along either meet horizontal diagonal vertical diagonal bound enter corridor path via terminal former case apply argument done proof lemma show modified pass vertex without increasing length resulting path latter case observe construction also vertex diagonal extended done since discussed cell aligned otherwise unique cell aligned common diagonal bounding case since passes indeed intersects two diagonals means former case improved algorithms section explore geometric structures give observations decomposition results together results section help give improved algorithms compute diameter center using similar algorithmic framework section geodesic distance functions recall preliminary algorithms section rely nice behavior geodesic distance function specifically restricted two cells lower envelope linear functions two different cell decompositions observe analogues lemmas two cells extending alignedness relation cells follows consider geodesic distance function restricted two cells call cell oceanic coastal otherwise coastal case well understood discussed section otherwise two cases case oceanic case one oceanic discuss two cases bae korman mitchell okamoto polishchuk wang case extend alignedness relation oceanic cells end alignedness already defined two oceanic cells define alignedness relation following way contained cell aligned say aligned however may contained cell endpoints horizontal diagonals gates vertices endpoints create vertical diagonals resolve issue augment adding vertical diagonals specifically vertical diagonal diagonal contains add extend vertically hits boundary way add vertical diagonals size still 
results obtained still applicable new little abuse notation still use denote new version two oceanic cells must unique cell contains defined aligned aligned lemmas naturally extended follows along extended alignedness relation lemma let two oceanic cells holds aligned otherwise exists shortest path passes vertex vertex proof belong lemmas applied suppose aligned contained aligned definition hence lemma turn case focus bay canal since gates need somehow incorporate influence gates decomposition end add additional diagonals follows extend horizontal line endpoint gate hits extend vertical line endpoint gate endpoint horizontal diagonals added let denote resulting decomposition note cells partitioned cells combinatorial complexity still gate let region points joined point vertical horizontal line segment inside since endpoints also obstacle vertices boundary formed four diagonals hence cell either completely contained cell former case said following let coastal cell intersects oceanic cell depending whether gate three cases cells lemma handles first case lemma deals special case latter two cases lemma second case lemma third case lemma proving lemma proof lemma summarizes entire algorithm three cases lemma suppose gate proof suffices observe joined rectilinear path whose length equal distance fact geodesic diameter center consider path assume directed gate call last gate crossed path shortest path length smallest among paths suppose shortest path since may intersect may avoid intersect bay either avoids shortest path gate otherwise canal either avoids shortest path two gates following lemma lemma suppose gate least one exists shortest path avoids shortest path passes vertex another vertex focus shortest paths according lemma suppose gate shortest paths avoid exists shortest path containing vertex vertex proof bay since shortest paths avoid must contained thus must exist paths canal although may crossed gate also exist paths specifically paths otherwise also paths cross gates let shortest path since crosses horizontal vertical diagonals define escape reach implies intersects horizontal vertical diagonals defining thus modified pass vertex done proof lemma opposite end since apply argument symmetrically modify pass vertex thus lemma follows remaining case recall coastal intersects oceanic implying intersect lemma let gate suppose exists unique vertex concatenation segment svg shortest path inside results shortest path proof let since intersect gate therefore bay must contained thus canal may intersect gate union forms simple polygon thus simple polygon apply fact let unique euclidean shortest path consider union points suppose directed forms hourglass distinguish two possibilities either open closed assume open see figure exist thus without loss generality assume lies right observe first bae korman mitchell okamoto polishchuk wang figure illustration proof lemma open closed segment also since cell implies shortest path contained crosses pair vertical horizontal diagonals define precisely crosses left vertical lower horizontal diagonals letting vertex defined two diagonals apply argument proof lemma modify pass assume closed see figure two funnels let one contains let apex every euclidean shortest path passes note obstacle vertex thus cell aligned without loss generality assume lies right observe euclidean shortest path monotone since apex since lies left monotone modify path pass vertex previous case let vertex described lemma found efficiently shown proof lemma consider union euclidean 
shortest paths inside points since simple polygon union forms funnel base plus euclidean shortest path apex recall fact euclidean shortest path inside simple polygon also shortest path let set horizontally vertically extreme points convex chain gathers leftmost rightmost uppermost lowermost points chain note includes endpoints apex observe following lemma lemma suppose exists shortest path passes moreover length path proof since simple polygon euclidean shortest path also shortest path fact thus length shortest path point funnel equal length unique euclidean shortest path contained lemma assumption among paths cross gate exists shortest path consisting three portions svg unique euclidean shortest path vertex convex chain let last one among encounter walk along consider segment may cross geodesic diameter center done replacing subpath otherwise crosses two points since includes extreme points chain subchain hence replace subpath monotone path consists convex path along length monotone path equal fact consequently resulting path also shortest path desired property cell let combinatorial complexity boundary cell may bounded constant otherwise trapezoid triangle thus geodesic distance function defined two cells explicitly computed time preprocessing shown lemma lemma let cell preprocessing function cell explicitly computed time provided computed moreover lower envelope linear functions proof oceanic lemma implies aligned hand coastal cells lemma implies conclusion since either case geodesic distance lower envelope linear functions hence provided values pairs known envelope computed time proportional complexity domain suppose coastal oceanic cell intersects bay canal also cell lemma implies lemma discussed section thus assume cell add diagonals extended endpoint gate obtain specify cells gate time following let oceanic cell note cell partitioned cells two cases depending whether bay canal first suppose bay let unique gate case shortest path provided intersects since unique two subcases depending whether lemmas otherwise thus lemma follows identical argument suppose since unique gate case need find vertex purpose compute four euclidean shortest path maps spma inside time fact spma also shortest path map specify geodesic distance points results piecewise linear function test whether holds lemma exists vertex test passed vertex since shortest path map spma complexity effort find bounded next compute funnel extreme vertices done exploring spma time bae korman mitchell okamoto polishchuk wang apply lemma obtain thus lower envelope linear functions otherwise dvg lemma since lower envelope constant number linear functions thus case conclude bay case suppose canal two gates falls one three case neither iii preprocessing compute done bay case analogously compute note shortest path either provided intersects thus chooses minimum among shortest path shortest path shortest path avoiding possible consider three cases suppose case either lemma otherwise neither apply lemmas hence lemma follows suppose neither lemma length shortest path equal dvg length shortest path equal geodesic distance minimum two quantities thus lower envelope linear function lemmas min min dvg min case analogous neither lemma suppose case handled symmetrically lemma neither lemmas remaining case case length shortest path equal dvg lemma gate length shortest path equal lemmas thus geodesic distance smaller two quantities consequently verified every case last step proof observe sufficient handle separately cells whose union forms 
original cell since cell decomposed cells computing geodesic diameter center lemma assures ignore coastal cells completely contained interior bay canal order find farthest point suggests combined set cells two different decompositions let set cells either belongs coastal cell note consists oceanic cells coastal cells since geodesic diameter center boundary bay canal covered cells lemma implies following lemma lemma point apply approach section use instead compute geodesic diameter compute diameter pair cells suppose know value algorithm handles pair cells according types applying lemma following lemma computes cell vertices lemma time one compute geodesic distances every pairs two cells proof let set vertices oceanic cells set vertices coastal cells note handle pairs vertices separately three cases iii let vertex compute shortest path map spmcore core domain pcore discussed section recall spmcore complexity computed log time triangulated point geodesic distance dcore determined constant time locating region spmcore contains lemma dcore computing done time running plane sweep algorithm lemma spmcore thus spend time since compute pairs vertices time case also handled similar fashion let also show computing done time lies ocean apply argument case thus assume purpose consider point hole hole obstacle consisting one point polygonal domain obtain new domain compute corresponding corridor structures done time time triangulation given discussed section let denote ocean corresponding new polygonal domain since lies bay canal subset definition bays canals ocean thus compute core structure shortest path map spmcore log time analogously complexity spmcore bounded finally perform plane sweep algorithm spmcore case get values time remains case iii fix vertices either lie interior case assume triangulation discussed section recall compute shortest path map spm log time since spm stores obstacle vertices computing done time bound adding obstacle vertices thus case one lies handled log time following focus compute let edges arbitrary order modify original polygonal domain obtain rectified polygonal domain prect follows define bae korman mitchell okamoto polishchuk wang figure illustration construct prect bay gate decomposition inside dark gray region depicts hole bounding coastal cells intersecting shaded light gray color black dots vertices one labeled triangle highlighted rectified polygonal domain prect obtained expanding hole boundary prect depicted solid segments edges ordered order set vertices coastal cell shoot two rays vertical horizontal towards hits let triangle formed two points hit rays since vertex cell facing construction two rays must hit thus triangle well defined expand hole triangles let prect resulting polygonal domain every triangle regarded obstacle prect also add prect obstacle vertices see figure observe prect subset subsets lie boundary prect obstacle vertices two points prect let drect geodesic distance prect claim following prect holds drect suppose claim true construction prect done time compute shortest path map spmrect rectified domain prect obtain since prect holes vertices construction prect task done log time last case iii processed total log time prove claim follows proof claim triangle well defined call triangle maximal note two maximal triangles may share portion sides indeed prect union maximal triangles pick connected component prect set either maximal triangle union two maximal triangles share portion sides construction prect either case observe portion monotone path 
consider prect shortest path lies inside prect done since length path inside prect least otherwise may cross number connected components pick connected component crossed let first last points encounter walking along since monotone observed geodesic diameter center path along boundary also monotone fact thus replace subpath without increasing length resulting path thus length equal avoids interior repeat procedure connected components crossed last final path length equal avoids interior prect path prect since drect general shortest path prect hence drect proves claim consequently total time complexity bounded log lemma thus follows algorithms computing diameter center summarized proof following theorem theorem geodesic diameter center computed time respectively proof first discuss diameter algorithm whose correctness follows directly lemma execution procedure lemma preprocessing algorithm considers three cases two cells oceanic coastal iii coastal oceanic either case apply lemma case oceanic cells total complexity thus total time case bounded case coastal cells total complexity since trapezoidal thus total time case bounded case iii fix coastal cell iterate oceanic cells preprocessing done proof lemma take time since thus total time case iii bounded next discuss algorithm computing geodesic center consider cells compute centers preprocessing spend time compute geodesic distances pairs vertices lemma fix cell compute geodesic distance function restricted applying lemma section compute graph projecting graph take upper envelope graphs lemma analogue lemma thus center computed time denotes total complexity lemma implies time complexity note since cell either triangle trapezoid complexity thus lemma computing center takes time preprocessing lemma iterating takes time bae korman mitchell okamoto polishchuk wang conclusions gave efficient algorithms computing geodesic diameter center polygonal domain particular exploited extended corridor structure make running times depend number holes domain may much smaller number vertices would interesting find improvements algorithms hopes reducing running times would also interesting prove lower bounds time complexities problems acknowledgements work bae supported basic science research program national research foundation korea nrf funded ministry science ict future planning ministry education korman partially supported scientific research grant numbers mitchell acknowledges support binational science foundation grant national science foundation okamoto partially supported jst crest foundation innovative algorithms big data scientific research grant numbers polishchuk supported part grant sweden innovation agency vinnova project swedish transport administration trafikverket wang supported part national science foundation references ahn barba bose carufel korman algorithm geodesic center simple polygon proc symposium computational geometry pages aronov geodesic voronoi diagram point sites simple polygon algorithmica asano toussaint computing geodesic center simple polygon technical report mcgill university montreal canada bae korman okamoto geodesic diameter polygonal domains discrete computational geometry bae korman okamoto computing geodesic centers polygonal domain proc canadian conference computational geometry journal version appear computational geometry theory applications http bae korman okamoto wang computing geodesic diameter center simple polygon linear time computational geometry theory applications chazelle triangulating disjoint jordan chains 
international journal computational geometry applications chazelle theorem polygon cutting applications proc annual symposium foundations computer science pages chen wang nearly optimal algorithm finding shortest paths among polygonal obstacles plane proc european symposium algorithms pages chen wang computing visibility polygon island polygonal domain proc international colloquium automata languages programming pages journal version published online algorithmica geodesic diameter center chen wang shortest path queries among polygonal obstacles plane proc symposium theoretical aspects computer science pages edelsbrunner guibas sharir upper envelope piecewise linear functions algorithms applications discrete computational geometry guibas hershberger leven sharir tarjan algorithms visibility shortest path problems inside triangulated simple polygons algorithmica hershberger snoeyink computing minimum length paths given homotopy class computational geometry theory applications hershberger suri matrix searching metric siam journal computing inkulu kapoor planar rectilinear shortest path computation using corridors computational geometry theory applications kapoor maheshwari mitchell efficient algorithm euclidean shortest paths among polygonal obstacles plane discrete computational geometry mitchell optimal algorithm shortest rectilinear paths among obstacles canadian conference computational geometry mitchell shortest paths among polygonal obstacles plane algorithmica papadopoulou lee new approach geodesic voronoi diagram points simple polygon restricted polygonal domains algorithmica pollack sharir rote computing geodesic center simple polygon discrete computational geometry schuierer computing center simple rectilinear polygon proc international conference computing information pages suri computing geodesic furthest neighbors simple polygons journal computer system sciences wang geodesic centers polygonal domains proc european symposium algorithms pages
8
mmse precoder massive mimo using quantization ovais bin usman hela jedda amine mezghani josef nossek jun institute circuit theory signal processing technische munich germany email abstract propose novel linear mmse precoder design downlink massive mimo scenario economical computational efficiency reasons low resolution dac adc converters used comes cost performance gain recovered large number antennas deployed base station appropiate precoder design mitigate distortions due coarse quantization proposed precoder takes quantization account split digital precoder analog precoder formulate precoding problem mse users minimized constraint simulations compare new optimized precoding scheme previously proposed linear precoders terms uncoded bit error ratio ber index massive mimo precoding quantization transmit signal processing introduction massive mimo system named antenna system seen promising technology next generation wireless communication systems huge increase number antennas improve spectral efficiency energy efficiency reliability large number antennas say antennas simultaneously serves much smaller number users knowledge csi csit large spatial dof massive mimo systems exploited significantly increase spatial gain using precoding linear precoders regularized rzf scheme shown thus practical use linear precoding techniques massive mimo systems therefore mainly focus linear precoding techniques work price pay massive mimo systems increased complexity hardware number radio frequency chains signal processing resulting increased energy consumption transmitter several approaches considered literature decrease power consumption spatial modulation load modulation use parasitic antennas use transceivers one attractive solution overcome issues high complexity high energy consumption associated massive mimo use low resolution adcs dacs power consumption adc dac one devices reduced exponentially decreasing resolution quantization drastically simplify amplifiers mixers therefore design mmse linear precoder massive mimo scenario resolution dacs adcs restricted bit precoder design aims mitigating distortions due coarse quantization addition interference iui similar work presented authors optimize first quantizer levels give expression mmse precoder takes account quantizer however quantization transmitter considered contribution optimize quantizer quantizer work constant levels introduce second precoding stage analog domain quantizer minimize distortions due complex gaussian channels proposed precoder designed based iterative methods assume perfect csit study new precoder scheme improving ber compared precoder introduced paper organized follows section system model presented section derivations related quantization introduced section formulate optimization problem show derivations corresponding solution sections interpret simulation results summarize work notation bold letters indicate vectors matrices nonbold letters express scalars operators stand complex conjugation transposition hermitian transposition expectation respectively identity matrix denoted zeros ones matrix rows columns defined signal vector get fig system model define sign sign additionally diag denotes diagonal matrix containing diagonal elements denote standard deviation correlation coefficient respectively circular distributed gaussian signal system model consider massive mimo downlink scenario depicted fig antennas serves users signal vector contains data symbols users represents set qpsk constellation assume system quantization 
transmitter well receiver deployed therefore order mitigate iui make use precoder consisting digital precoder analog precoder use quantizer transmitter delivers signal belongs set means magnitude entry constant phase belongs result antennas end getting power recover information loss power allocation due employ analog precoder diagonal structure end yqd received decoded signal vector users reads hyqd channel matrix entries zero mean unit variance awg noise vector statistical theory quantization design precoder takes account effects need know statistical properties quantization especially auto properties gaussian input signal since quantization strong effects statistical properties signal statistical properties hard limiters dealing signals derived derivations applied signals introduced section covariance matrix unquantized circular distributed signal covariance matrix quantized signal given kcx diag cxq covariance matrix quantized circular distributed signals given cxq arcsin arcsin note diagonal entries cxq squared norm quantized signals lead following result diag cxq four equations basis solving optimization problem presented next section optimization problem optimization problem formulated follows arg min kyqd etx diagonal objective function aim minimizing mse desired signals received signals given power transmitted signal yqd limited available transmit power etx end following expression mse mse find three expectation terms make use covariance cross correlation matrices already mentioned input signal covariance matrix covariance matrix precoder output given yyh pph linear expression covariance matrix cyq make use approximation arcsin get pph cin constraint diag covariance matrix cyqd transmitted signal yqd given cyqd dcyq received signal covariance matrix reads hdcyq dhh look mse expression one terms need find since structure similar cyq end dhh diag finally putting expressions end following expression mse mse dhh mse expression contains two unknown variables intuitively function since reallocates power transmit signal originally intended gets lost due end define new matrix version row unit norm note mse expression contains product also contains products thus mse expression found far rather purpose remove introduced therefore obvious choice diag using fact diagonal matrix simplify constraint etx final optimization problem using finally write optimization problem words optimization respect reformulated one respect norm row etx diag pph pph min still terms mse need find two proceed solve note use calculate mentioned expectations expressed follows constraint simply expressed yqd dcyq etx solving optimization problem note objective function furthermore solution set satisfy constraint thus resort gradient projection algorithm solve optimization problem steps algorithm found table needed derivative mse respect given diag diag diag diag diag diag simulation results section compare proposed precoder different precoding schemes terms uncoded ber simulation results averaged channel realizations used modulation scheme qpsk antennas serve users transmit symbols per channel use tolerable error iteration step gradient projection algorithm set respectively fig uncoded ber simulated function available transmit power etx refers linear wiener filter precoder quantization applied system model linear wiener filter precoder take quantization account ber performance sensitivity inaccuracy implementation studied plotted fig seen even error ber performance degrade much compared ideal case table gradient 
projection algorithm iteration step tolerable error step let step let uncoded ber error error step mse mse terminate algorithm otherwise let return step transmit fig sensitivity analysis respect equal power allocation performed transmit power constraint still satisfied appropriate scaling denotes proposed precoder design quantized precoder gradient projection method refers proposed precoder design power allocation equal transmit antenns additional analog processing required qwp designates quantized wiener filter precoder introduced seen results ignoring distortions due quantization leads worst case scenario taking account significant improvement uncoded ber achieved performance improvement increased unequal power allocation transmit antennas deployed shown case qwf proposed precoder design outperforms designs iterative design converges solution different initial values analog diagonal precoder built within power amplifiers antenna fig shows distribution normalized diagonal coefficients observe deviation coefficients among different antennas channel realizations respect mean value max quite small therefore requirements terms dynamic range power amplifier still reasonable uncoded ber fig distribution diagonal coefficients channel realizations etx quant qwp conclusion fig ber comparison different precoding schemes general analog processing exhibits higher complexity lower accuracy compared digital counterpart due hardware implementations imperfections aging temperature analog precoder offers less complexity due positive diagonal structure since updated every coherence time present new mmse precoder design mitigate iui massive mimo scenario assuming perfect csit proposed precoder design takes account signal distortions due quantization transmitter receiver precoder split digital precoder separates users direction diagonal analog precoder power allocation antenna precoding method shows better performance terms uncoded ber compared precoder designed analog precoder involved proposed scheme updated every coherence time thus reducing implementation complexity furthermore ber performance insensitive imperfections analog precoder implementation references marzetta noncooperative cellular wireless unlimited numbers base station antennas wireless communications ieee transactions vol november bjornson kountouris debbah massive mimo small cells improving energy efficiency optimal coordination telecommunications ict international conference may peel hochwald swindlehurst technique multiantenna multiuser channel inversion regularization communications ieee transactions vol jan gershman sidiropoulos shahbazpanahi bengtsson ottersten convex optimizationbased beamforming signal processing magazine ieee vol may rusek persson buon kiong lau larsson marzetta edfors tufvesson scaling mimo opportunities challenges large arrays signal processing magazine ieee vol jan renzo haas ghrayeb sugiura hanzo spatial modulation generalized mimo challenges opportunities implementation proceedings ieee vol jan muller sedaghat fischer load modulated massive mimo signal information processing globalsip ieee global conference dec kalis kanatas papadias parasitic antenna arrays wireless mimo systems springer bjornson matthaiou debbah massive mimo arbitrary arrays hardware scaling laws design wireless communications ieee transactions vol svensson andersson bogner power consumption analog digital converters norchip conference nov mezghani ghiat nossek transmit processing low resolution electronics circuits systems icecs ieee 
international conference dec price useful theorem nonlinear devices gaussian inputs information theory ire transactions vol june bertsekas tsitsiklis parallel distributed computation numerical methods prenticehall joham optimization linear nonlinear transmit signal processing thesis lehrstuhl netzwerktheorie und signalverarbeitung technische
7
using tra driver external dynami pro ess observation extended abstra arxiv jan pierre deransart inria quen ourt chesnay cedex fran abstra one interested observation dynami pro esses starting tra whi leave one makes produ onsidered possible make several observations simultaneously using large variety independently developed analyzers purpose introdu original notion tra apture idea pro ess instrumented way may broad ast information whi ould ever requested kind observer analyzer full tra data elements whi needs approa uses alled tra driver whi ompletes tra drives answer requests analyzers tra driver allows restri information makes approa tra table side potential size full tra seems make idea full tra unrealisti work explore onsequen notion term potential ien analyzing respe tive workloads full tra many analyzers likely run true parallel environments illustrate study use example observation resolution onstraints systems sear propagation using phisti ated visualization tools developed proje oadymppac pro esses approa onsidered omputer programs believe extended many kinds pro esses introdu tion one interested observation dynami pro esses starting tra whi leave one makes produ onsidered possible make several observations simultaneously using large variety independently developed analyzers one wants observe pro ess pra instrument type observation whi one wants make one thus implements work partly supported oadymppac fren rntl proje new tra analyzer one adapts one work approa whi ompletes existing largely avoided one adopts start general onsists instrument produ full tra unique tra useful later observations whi one plan make analyzer full tra data elements whi needs approa uses driver whi alled tra ompletes tra drives answer requests analyzers approa parti ularly tempting pra full tra never needs ompletely expressed hanges information remain limited work tra implementation driver made evaluation terms feasibility performan remains however problemati approa allows redu size tra emitted useful bare minimum thus speed whole pro ess allows onsider large full tra also size tra beyond ompensation ost whi grows ertain size produ tion ost tra likely ome prohibitory pre isely question whi one interested one approa pra level without slowing pro ess essively get pre ise idea one must take ount time tra produ tion also tra used notion tra driver presented experimented ontext nite domain onstraint resolution question nature tra whose emission ontrolled tra driver dire tly led work explore onsequen notion term potential ien analyzing respe tive workloads full tra many analyzers likely run true parallel environment work introdu notion tra pro ess whi apture idea instrumented way may broad ast information ould ever requested kind observer analyze nature work tra driver distribution fun tions tra driver one hand analyzers hand allows better estimate powerful useful ient ept full tra hite ture involved provided ompanied right omponents illustrate study one take example observation resolution onstraints systems sear propagation ing sophisti ated tools visualization ording method developed proje discipl oadymppac eld parti ular interest ause tra lude representations ompli ated potentially bulky obje omputations evolution domain variables time logi sto hasti onstraints systems ause true omplex systems omplexity resolution lose extended abstra present essively epts full tra remental version question semanti analyze nally problem distribution work driven tra external analyzers whi work useful tra whi provided requests 
full tra introdu pro ess ept full tra nition full tra ontains one may like know exe ution ludes likely des ription pro ess pro ess given moment state hara terized enter framework arti exa tly state supposed des ribed nite set parameters value moment nth parameter ept moment spe hereafter urrent state denoted list values parameters also assumed transformation state another made steps hara terized tion set tions performed moment labelled ned ept full tra seems appli ation pra approximations important thing admit whatever level details whi one wishes observe pro ess always threshold whi makes possible tra one onsider ase program less thorough instrumentation whi produ tra words program augmented tra nition virtual full tra sequen tra events form virtual full tra unbounded omprising following elements unique identi event hrono time tra varies unit values always reasing distinguish time observed pro esses analyzers whi may monotonous ompared hrono parameters hrono tra event parameters alled attributes full urrent state parameters may last urs parameters attributes des ribe obje tions performed rea new state state event orrespond new rea hed state tion identi set state state tions hara terizing step tra tively produ pro ess regarded partial full tra pra one sees partial tra whi start moment pro ess observed presumably initial state limit summary single example tra tra prolog systems based standard prolog tra far full tra many useful information even easily available moment represented extra call call call exit call exit call ben ben ben byrd tra adopted majority prolog systems moment event orresponds laun hing resolution goal tra orresponds stage resolution event ontains following information two attributes whi indi ation depths rst depth sear ond port whi orresponds tion whi made possible rea stage thus port orresponds goals solve total indi ated subgoal installation exit ess subgoal ports ase detailed rst nes laun hing rst goals one resolution terminates tra nite unbounded otherwise last attribute gives subgoal solve tra hrono omprise identi ers hrono orresponds sequential order emission tra events resulting hrono plays role identi interesting note obje tive tra display steps evolution obtaining possible proofs port fail orresponds failure redo nondeterministi goal new resolution tried extend des ribe also sear however partial never expli itly des ribed sear tra thus provide parameters interest dire tly partial brings following observation parameters full state given expli itly tra attributes port goal whi possibly make possible onsider point later attributes thus give remental information whi makes possible obtain new resolution step say tra remental leads following parti ular tra whi without details nition dis ontinuous tive remental full tra tra dis ontinuous full tra whi ontains events whose variation essive hrono may higher unit holes either ause emission dis ontinuous ause observing proess listens asionally noti dis ontinuity erns tra emission extra tion words moment orresponds tivation tra tra tive full tra starting knowledge form one dedu denotes set attributes tive tra tra emitted tra whi tually virtual full tra parti ular attributes parti ular tion label ase tive tra parameters another ase full tra tra remental attributes urrent state noted form deltat hanges deltat ontains des ription tions whi modify values parameters ing moment remain full tra tra must satisfy following dition starting knowledge deltat one dedu extended summary distin tion absolutely essary one distinguish virtual 
tive tra one speak parameters attributes pra ally tra remental thus uses attributes pre eding example illustrates well ause emission full state tra event would obviously prohibitive size whi would take events would high tra would extremely redundant ondition imposes simply one retrieve full tra starting transmitted attributes pre eding full state pra observed pro esses produ partial tra ase retrieval full state impossible one wants full tra least omplete one essary ask tra provide least one full state observing pro ess needs take ient enable maintain ount partial state onsistent partial state hand needs given moment know full state least full omplete state tra provide least urrent state part pra ally tra dis ontinuous even often part essarily nite regarded nite single tra also raises problem knowledge initial state whi observed pro ess moment initial tra event hrono equals thus ommuni ation analyzers full initial state event tra two reasons justify one interested manner obtaining state urrent value parameter may exist pro ess requires small omputation extra hand may exist require partial ution pro ess used analyzers chip environment cosyte apa ity clpgui obliges stop exe ution observed pro ess least possible give ability observing pro ess stop resume observed pro ess leads idea sket hed introdu tion syn hronization primitives pro esses kind image freezing whi makes possible omplete information moment need also results need fun tion ess ording urrent state important note tive remental tra keeping full urrent state harge observing pro ess observed pro ess assumes however observed pro ess least overy points whi full ontains urrent state maintained essible allow observing pro ess resume tra able restart full urrent state important note full tive tra full virtual tra keep full pro ess additional urrent state observing harge pro ess ause tra observed pro ess obligation ulate requested parameters expli itly requested attributes supposes however observed pro ess whi full omputed ontains least overy urrent state preserved essible may allow observing pro ess resume tra able restart full urrent state aspe treated observational semanti one interested tion semanti full tra alled observational semanti tra explain anything olle tion question arises however understand tra one needs two levels semanti rst level orresponds des ription tions appearing parameters attributes tra semanti observed data ond level orresponds kind tra semanti des ribing values parameters moment derived values parameters moment tions whi sele ted clearly means one model pro ess urately des ribes evolution tra two events form semanti subje another work progress tra one needs rst level semanti relations parameters state attributes emitted tra events must known example prolog tra properties obje must known understand relation depth tree tree semanti tra full remental thus ontains des ription relations parameters whi relates semanti produ attributes understand tra omprehensively essary knowledge pro ess observational semanti semanti tra rst semanti although part seen like tra uses semanti obje tions full tra seen abstra semanti sense cousot dom fig rule abstra model prolog displayed kind natural semanti whi expressed nite set rules form condition event tra obtained starting state tion appli ation rule set tions performed hrono denoted semanti take form stru tural operational semanti evolving algebra less ned tempting omplete semanti tra order allow lear implementation coming example prolog tra augmented tra onstraints resolution sear 
propagation alled codeine implemented manner abstra model ned whi implemented several solvers made possible meet two prin ipal obje tives portability analyzers whose input data based full tra robustness tra ers whose implementation guided improved good methodologi approa based rigorous semanti useful however note would omplete semanti tra whi omplete formalization observed pro ess almost impossible pra ause degree nement would imply example attribute luded many tra cpu time onsumed pro ess sin beginning tra emission formalize variations issued host system would amount introdu ing observational semanti model system whi pro ess exe uted finally onfused whi would make possible sole initial state rules omplete operational semanti ourse pro ess starting urrent full state ient know whi rule may apply example model developed nite domain onstraints resolution des ribed least one rule ontains set operational rules applied state whose always pre ise enough ide onditions applied rule depi ted figure ontains new ondition dom whi must satis meaning urrent node belongs sear nothing says whi node must sele ted sole knowledge full urrent state whi ludes urrent sear ient knowledge urrent tra event essary know whi new node thus know rule instantiated tra driver evaluation idea use tra driver proposed sin origin prolog ontext logi programming regularly veloped tested various environments originality approa onsists proposing full tra ommuni ated tegrity analyzer one eives part tra whi relates instead broad asting urbi orbi full tra analyzer would task lter ltering performed sour level observed pro ess whi behaves like server tra analyzer lient whi restri indi ate observed pro ess tra whi needs indeed pro esses brings information omplete separation observed observing onsider hite ture lient hanges lient indi ates server tra needs server provides requires ltering tra longer arried analyzer observed pro ess task tra driver perform requested ltering dispat tra requested lients hite ture well hanges pro esses des ribed parti ular details extended abstra addition aspe related possibilities modulating tra emitted ording needs analyzers onsidered possibility previously mentioned syn hronize pro esses onsidered approa allows redu size tra emitted useful bare minimum thus speed whole pro ess onsider large full tra also tra beyond ompensation allows ost whi grows size ertain size produ tion ost tra likely ome prohibitory pre isely question whi one interested one approa pra level without slowing pro ess essively get pre ise idea one must take ount time tra produ tion also tra used one ould think rst sight observed pro ess produ full tra makes approa unrealisti server indeed slowed simple ompute great number parameters whi perhaps never used approa would thus penalizing pro ess instrumented expensive tra whenever weak portion full tra would used another side pre isely ase onsiderable onomy realized ause ltering sour transmission limited tra whose osts oding extremely redu idea pra emitting one small part tra ltered sour ompensates ost work mainly related updates parameters full tra analyzed pro ess hand many analyzers tivated simultaneously use given moment tra equivalent full tra problem produ full tra arise must ase produ question interest using tra driver worth posed show even ase driver save time additionally another type onomy must onsidered whi mentary previous one use remental tra tive tra instead virtual full tra ase loss information equivalent full tra still transmitted redu tion amount data transmitted kind used 
order limit size emitted data later ase tra performs kind analyzer kind task must taken ount analysis workloads shown performan tra analyzers uen kind load rather size full tra order pre ise essary analyze repartition work within various pro esses times take ount detailed performan analysis times erning pro ess tra pilot one hand times erning one analyzer hand one must suppose several analyzers running true parallelism pro ess tra driver ore ond analyser ode side pro ess tra driver tprog time devoted exe ution instrumented program instrumented dea tivated tra tcore additional exe ution time pro ess instrumented produ full tra tivated tra time devoted onstru tion elements essary likely later extra tion parameters urrent state time largely uen size full tra stage approa one omputations must performed ause presented onsiders must possible produ full state moment urrent ase parti ular dis ontinuous tra time related form full tra depend emitted parameters full tra already part pro ess time null tcond time king onditions ning tra emitted analyzer ltering time null ltering emission full tra textract omputing time parameters requested ltering time formatting tra oding possible ompression emission time spe textract driver tcond orresponds times namely tcore regarded times related tra noti approa driver hoi parameters attributes tra ontained tive ommuni ated information possibility uen ing form attributes example degree information oded form attributes nature attributes remental information part tra annot modi adapted tra driver side additional ompression algorithm used redu size information belongs stage generalizing idea one ould also onsider possibility put tra abstra attributes adapted spe redu size use whole tra orresponding attribute omputation time would thus related stage generalization studied side analyzer ilter time ltering analyzer time null ltering formed sour tra events sent parti ular analyzer indeed tagged sour pre aution mainly ause implemented tra driver many external analyzers lter tra however essary tdecode umvent time well must onsider ase time oding eived tra time impossible oding ompared ompletely eliminated used ause ommuni ation encode part ompression algorithms onsidered even times umulated one save substantial amount time transmission trebuild time rebuilding full tra starting tive tra omputation urrent parameters starting emitted attributes time must tributes ompared textract equilibrates favorably texec omputation time onsidered time even umulated textract ost tra emission exe ution time proper fun tions analyzer sophisti ated analyzers used data analysis example time ome important makes negligible one orresponding tra produ tion example codeine des ribed implements generation full tra analysis onstraints resolution extension urrent state ontains among attributes sear straints state variables domains full tra see alled proje generi generate full tra full omplete des ription tra even codeine ould possible codeine tra onsiderably simple byrd tra whi stri tly ontained remental tra generated onstraints variables states urrent state pro ess rebuilt later starting ost extra tion redu proper data management realized tra obtain ost urrent sear tree pro ess would uted partially therefore would essary freeze nently urrent exe ution ost management essible sear moment would size hand analyzer learly intra table ause using produ remental tra maintain obje permanently obtaining useful codeine also parameters ontains tra driver nitions tra emitted spe ation emitted tra stored data whi 
must provided pro ess starts times distributed follows side pro ess tra driver tprog exe part tcore ution time swit hes tra small tcore negligible duration general time onstru tion parameters full codeine tra ting data useful extra full tra kind tcond time ltering full tra sele textract omputing time attributes requested tra orresponding parameters requested ltering omputing time attributes orresponding quested parameters oding xml format prolog term emitted tra emission time remental tra side analyzers experiments framework oadymppac proje sophisti ated analyzers intensive visualization graphs visual data analyses revealed following ilter tdecode osts times intri ate synta xml full tra ltering part whi tra approa analysis module ould avoided ould taken ount analyzers built proje trebuild time onstru tion parameters full tra variables domains tive onstraints set sear time grows non mated tor ording size data sometimes related size tra onsiderable slow analyzer tra low speed analyzer may ause ase analysis tra strong slow observed pro ess texec time onstru tion obje visualized graphs data tables times grow exponentially ien used algorithms ording size data ial also ause slow pro ess pleads favor preliminary treatment information transmission example sele tion distinguished nodes put tra redu size drawn graphs olle groups variables unique attribute redu number lines matrix shown experimentally behavior codeine tra full tra ompares favorably behavior prolog byrd tra furthermore ltering realized tra driver prejudi performan give theoreti justi ation result showing approa already justi experimentally also justi theoreti ally langevine observes ltering tra together tive lter one omes running several automata together may ient running one indeed essen ltering relates simpli sele rst tra events ner additional ltering thus events redu one tra whose language onditions whose role ontaining ports requested analyzer arried thereafter number admit rst ltering relates orresponds regular language possible onsider lter regular expression whose ognition full tra filtering done using non deterministi automaton orresponds ognition task union many state automata tive analyzers requesting tra however resulting automaton optimized ient ient automata asso iated single analyzer union automata ient terms omputation steps one operations ltering extremely frequent ause apply tra events speed onsiderable position analyze respe tive workloads first observe respe tive times sides may onsidered mulative depending whether respe tive pro ess run sequentially true parallelism later essors situation whi ase pro ess run onsidered approa slowest pro ess must taken ount evaluate exe ution time whole system tprog tcore times one side texec side orrespond times spe tra analyzers slowest analyzer main uen exe ution time times tcore ilter ompressible tcond one side depends size virtual tra side least one times must null ltering sour performan improved whatever size full tra textract one side times trebuild ases time appears negligible ompared times side orrespond respe tively omputation attributes parameters ipro ally one redu extra tion time thanks ltering onsidering low number sele ted tra events attributes parameters work already luded tcore time omputation portant probably signi antly greater extra tion time tencodea ndc one side times tdecode side important interest lies pro realized ompensates ommuni ation time redu emitted volume largely oding times summarize tra slowest analyzer main tors uen ing whole performan sides rease 
workload onsiderably however use tra driver hniques make possible ompensate partially sometimes tive manner osts related use full tra con lusion introdu ept full tra order take possibilities analysis dynami ount multiple pro ess analyzed pro ess mented addition tra tra driver analyzers possibility addressing orders driver hite ture onsidered des ribe intera tions pro esses analyzers analyzed pro ess enabled kle problem evaluation approa terms ien tried appre iate introdu tion tra driver global ien analysis dynami ould improve pro ess using several analyzers observing pro ess simultaneous way initially observed full tra ould parti ularly expensive produ part ould transferred without loss apa ity analysis full tra retrieved analyzers analyzers unfavorable ien ost ase term orresponds situation equivalent full tra must built extra ted emitted ase full tra union redu tra requested several analyzers showed even ase ltering realized tra driver able bring important bene observations proje discipl oadymppac using sophisti ated analyzers powerful tools visualizations well also showed limits performan analyzers tra even full tra study opens nally series questions ame often deep possible implement broad full tra ourse limits approa obvious ept full tra meaningful regards family possible analyses well known sele ted advan nothing guarantees priori additional instrumentation observed pro ess never essary beyond aspe interesting question relates feasibility implementation grained full tra one side indeed one able ompensate ertain produ tion osts large tra ting tive tra limited size side analyzers whose use temporary eptional loaded time full tra essary even uses part asionally slow observed pro ess whi intera tion language use whi dialogue driver analyzers question little kled mainly remains ope arti tenden however use language like xml way hosen oadymppac proje possibility redu ing tra bare minimum ourages nevertheless experiments showed data ommuni ation needed optimized ompression spe tra ombining usual methods ompression like using remental also introdu ing abstra attributes tra leads idea dialogue involved pro esses limited hoi tra events attributes tra apa ities syn hronization must extended allows also uen design attributes finally ial question related omprehension tra understand tra des ribe semanti semanti tra tra least large part given priori ause one model observed pro ess omprehension tra well implementation tra largely ilitated opposite arti ial ase ase many natural omplex pro esses one vast tra study even elds programming languages semanti seems better ontrolled priori one tries analyze program behavior trying understand tra thus one sees stinging usefulness general hniques based data mining web mining purposes however one must ognize full tra probably always lude portions aping kind des ription based formal semanti referen deransart outils dynamique pour programmation par contraintes oadymppac hni report inria quen ourt ole des mines nantes insa rennes cosyte ilog projet rntl http langevine tra driver hybrid exe ution analyses press pro eedings automated debugging symposium langevine tra driver versatile dynami analyses constraint logi programs serebrenik eds pro eedings workshop logi methods programming environments onferen workshop sitges spain computer resear repository langevine tra driver clp debugging monitoring visualization exe ution single tra demoen lifs hitz eds pro int conf logi programming number lncs fran deransart hermenegildo ski eds analysis visualisation tools constraint 
programming number lncs springer verlag clp system based standard prolog iso developed diaz http distributed gnu ense byrd understanding ontrol prolog programs logi programming workshop cousot abstra interpretation hnique ien informatiques kahn natural semanti brandenburg wirsing eds pro eedings stacs springer also inria natural semanti computer plotkin stru tural approa operational semanti daimi computer ien department aarhus university aarhus denmark gurevit evolving algebras tutorial introdu tion bulletin european asso iation theoreti computer ien observational semanti dynami analysis computational proess ole polyte hnique laboratoire lix palaiseau fran langevine deransart propagation tra formal nition ient implementation palamidessi proeedings international conferen logi programming mumbai india langevine deransart generi tra hema portability debugging tools apt fages rossi van eds ent advan constraints number ture notes arti ial intelligen springer verlag sele ted papers joint international workshop constraint solving constraint logi programming langevine deransart rigorous design tra ers experiment onstraint logi programming ronsse boss eds pro eedings international workshop automated algorithmi debugging ghent belgium computer resear repository logi programming environments dynami program analysis debugging journal logi programming tra hemata dynami analysis choi ryder zeller eds dagstuhl seminar understanding program dynami fran dagstuhl germany hop roft ullman eds introdu tion automata theory languages computation computer ien baudel visual referen manual manufa tured distributed ilog http ghoniem jussien fekete visexp visualizing constraint solver dynami using explanations barr markov eds pro int florida arti ial intelligen resear iety conferen aaai press denmat ridoux data mining king exe ution tra reinterpretation jones harrold satsko test information visualization irisa rennes fran zaidman calders demeyer paredaens applying webmining hniques exe ution tra support program comprehension pro ess iety pro eedings european conferen software maintenan rengineering csmr man hester
2
sample efficient policy search optimal stopping domains may karan goel carnegie mellon university christoph dann carnegie mellon university cdann abstract optimal stopping problems consider question deciding stop process order maximize return examine problem simultaneously learning planning domains data collected directly environment propose gfse simple flexible policy search method reuses data sample efficiency leveraging problem structure bound sample complexity approach guarantee uniform convergence policy value estimates tightening existing pac bounds achieve logarithmic dependence horizon length setting also examine benefit method prevalent approaches domains taken diverse fields introduction sequential decision making learning unknown environments commonly modeled reinforcement learning key aspect artificial intelligence important subclass optimal stopping processes agent decides step whether continue terminate stochastic process reward upon termination function observations seen far many common problems computer science operations research modeled within setting including secretary problem ferguson house selling glower lippman mccall american options trading jacka mordecki product pricing feng gallego asset replacement jiang powell well problems artificial intelligence like mission monitoring robots best metareasoning value additional computation zilberstein automatically deciding purchase airline ticket etzioni often stopping process dynamics unknown advance finding good stopping policy halt requires learning experience environment real experience incur real losses desire algorithms quickly minimal samples learn good policies achieve high reward problems emma brunskill stanford university ebrun interestingly prior work optimal stopping focused planning problem compute policies given access dynamics reward stochastic stopping process peskir shiryaev optimal stopping problems also framed partially observable markov decision process pomdp also exists work learning good policy acting pomdps bounds number samples required identify near optimal policy class policies kearns jordan however work either makes strong assumption algorithm access generative model ability simulate state stochastic process makes work suited improving efficiency planning using simulations domain use trajectories directly collected environment incurs exponential horizon dependence paper consider quickly learn nearoptimal policy stochastic optimal stopping process unknown dynamics given input class policies assume fixed maximum length horizon acting make simple powerful observation stopping problems rewards outcomes full length trajectory trajectory policy halts entire horizon provide estimated return halting one step two steps till horizon way single trajectory yields sample return stopping policy based propose algorithm first acts stopping full length horizon number trajectories performs policy search input policy class full length trajectories used provide estimates expected return policy considered policy class policy set highest expected performance selected future use provide sample complexity bounds number full length trajectories sufficient identify near optimal policy within input policy class results similar general results pomdps kearns jordan due structure optimal stopping achieve two key benefits bounds dependence horizon logarithmic instead linear generative model exponential without results apply learning stochastic stopping processes generative model required simulation results student tutoring 
ticket purchase asset replacement show approach significantly improves approaches problem formulation consider standard stochastic optimal stopping process setting tsitsiklis van roy assume stochastic process generates observations may vectors two actions halt continue process reward model known deterministic function sequence observations choice whether continue halt exist domains reward model nondeterministic function observations actions medical procedure reveals patient true condition sequence waiting common optimal stopping problems fall within framework considered including secretary problem quality secretary directly observed house selling price house bidder known asset replacement published guides worth asset plus knowledge cost buying new one etc focus episodic setting fixed maximum time horizon process finite horizon value policy expected return following horizon steps expectation taken stochastic process dynamics note policy may choose halt steps goal maximize return across episodes focus direct policy search methods see sutton precisely assume input parameterized policy class set policy parameters direct policy search require building model domain successful variety reinforcement learning contexts deisenroth rasmussen levine abbeel sample efficient policy search particularly interested domains evaluation policy incurs real cost environment stock market options selling settings wish find sample efficient methods policy search minimize number poor outcomes real world challenge know stochastic dynamics possible advance acting perform policy search identify good policy instead obtain information domain dynamics executing policies real world seek efficiently leverage experience quickly make good decisions present simple approach gfse gather full search execute algorithm sample efficient policy search gfse collects set horizon trajectories uses evaluate performance policy input policy class identifies good policy executes resulting policy future episodes key insight first step gathering data used evaluate performance policy policy class monte carlo estimation used estimate expected return policy running many times however scales poorly cardinality policy class algorithm gather full search execute gfse input policy class search method use theorem gather full trajectories environment identify policy using uses execute building dynamics model set data efficient model used simulate performance policy requires make certain assumptions domain markov property lead biased estimates alternatively importance sampling used evaluation precup unfortunately estimates tend high variance however simple powerful observation fullhorizon trajectory used yield sample return optimal stopping policies given full length trajectory performance particular policy simulated providing target policy halts time step therefore take subsequence observations use directly compute return would observed executing trajectory single trajectory provide one sample return policy set trajectories used provide sample returns given policy thereby providing empirical estimate policy evaluation policy class prior work shown given access generative model domain policy search done efficient way using common random numbers evaluate policies act differently episode kearns jordan setting trajectory essentially equivalent access generative model produce single return policy however access full length trajectory obtained running environment whereas generic generative models typically require teleporation ability simulate would happen 
next particular action given arbitrary prior history hard unless planning scenario one already knowedge dynamics process results require weaker assumptions prior results use stronger generative models obtain similar sample efficiency also achieving better sample efficiency approaches access similar generative models shortly provide sufficient condition number full length trajectories guarantee evaluate policy sufficiently accurately enable policy search identify policy within input policy class course empirically often wish select smaller simulation experiments demonstrate often small still enables identify good policy theoretical analysis provide bounds sample complexity gfse number full length trajectories required obtain near accurate estimates policies policy class ficient identify optimal policy policy class highest expected return first note optimal stopping problems consider paper viewed particular instance pomdp briefly hidden state space dynamics model determines current state transitions new state stochastically given continue action observation function hidden state reward also function hidden state action main result given policy class sample complexity scales logarithmically horizon make assumption access generative model significant improvement prior sample complexity results policy search generic pomdps large mdps kearns jordan required access generative model environment sample complexity scaled linearly horizon results thought bounding time required planning one access generative model used sample outcome reward observation given prior history action contrast results apply learning agent generative model domain must instead explore observe different outcomes without generative model domain sample complexity results policy search generic pomdps learning scale exponentially horizon kearns optimal stopping trajectories related trajectory trees kearns used evaluate returns different pomdp policies pomdp actions trajectory tree complete binary tree depth rooted start state nodes tree labeled state observation path root node tree denotes series actions taken policy trajectory tree used evaluate policy since every action sequence part tree however generic pomdps size trajectory tree exponential horizon optimal stopping problems tree size linear horizon figure allows obtain significantly tighter dependence generic pomdps analysis closely follows prior sample complexity results kearns kearns proceeded first considering bound viewed set mappings histories returns function viewed mappings histories actions use result bound sample complexity needed get estimates returns policies policy class follow similar procedure bound sample complexity contains potentially infinite number deterministic let policy class since optimalstopping policy maps trajectories actions binary labeling let vcr viewed set mappings full trajectories returns similar kearns results extend finite infinite stochastic well discounted case using horizon figure structure full trajectory horizon node represents observation arrows represent one two available actions assume bounded vmax vapnik kotz know vcr computed vmax otherwise return full trajectory lemma let set deterministic policies viewed set maps trajectories actions viewed set maps space full trajectories vmax dimension bounded vcr log proof proof proceeds similarly lemma kearns crucial difference policies operate structure contains nodes figure rather kearns trajectory trees nodes setting point agent gets consider whether halt continue halt action chosen 
trajectory terminates implies contrast standard expectimax trees size tree depends action space exponential horizon setting dependence induced actions linear thus produce much smaller set behaviors dependence logarithmic rather polynomial formally sauer lemma trajectories lad beled atmost ways first note full trajectories contain distinct trajectories across one per node refer figure structure full trajectories action labeling trajectories corresponds selecting paths path per full trajectory path starts first observation ends terminal node number possible selections thus atmost enh path viewed mapping full trajectory return selection therefore maps full trajectories returns terminal nodes across full trajectories thus distinct returns full trajectories set indicator threshold equal returns turn would atmost enh distinct binary labelings full trajectories thus set indicator functions define vcr generate atmost enh distinct labelings full trajectories shatter full trajectories set enh result follows proceed similarly theorem kearns theorem let potentially infinite set deterministic optimal stopping policies let let full trajectories collected environment let value estimates using let return bounded vmax trajectory vmax log log probability least holds simultaneously proof let space full trajectories every policy bounded map vmax let full trajectories generated environment dynamics using result vapnik kotz pwe probability vcr log vmax substitute vcr log inequality get result practice may impossible evaluate every policy select one best estimated mean cases use different search method algorithm find local optima using bound ensure policy values estimated accurately lastly discuss tsitsiklis van roy estimate values markov optimal stopping problems using linear combination basis functions use find threshold policy outline procedure tune basis function weights asymptotically guarantees policy value convergence best approximation assumptions construct policy class using basis functions inherit useful convergence results relying search procedure along retaining finite sample complexity results experiments demonstrate setting consider sufficiently general capture several problems interest approach gfse improve performance optimal stopping problems baselines ticket purchase many purchasing problems posed optimal stopping process return stopping simply advertised cost consider deciding purchase airline ticket later trip date order minimize cost opaque way prices set competitive pricing makes domain difficult model prior work etzioni groves gini focused identifying features create sophisticated models make good purchase decisions surprisingly hard improve earliest purchase baseline buys first observation use data groves gini collected real pricing data fixed set routes period years querying travel sites regularly collect price information route several departure dates distributed year period price observation sequence length table mean expenditure deploying different policies test set ticket purchase earliest purchase buys immediately latest purchase waits till departure date method earliest purchase latest purchase best possible price customer could commence ticket search point sequence customer starts days departure another week thus consider commencement points separately get distinct full trajectories similar groves gini construct parameterized policy class based ripper decision rules etzioni wait curr price days depart else buy buy corresponds halting also constructed complex class parameters 
learns different price thresholds depending far departure date consider nonstop flights routes separately method gfse collects full length trajectories first days trajectories uses construct single stopping policy performs simple policy search sampling evaluating policies randomly policy space uses best identified policy simulate ticket purchasing decisions departure dates occurring remaining part years trajectories restrict data departure dates contain price results test sets shown table policy search method succeeds finding policy leads improvement difficult earliest purchase baseline improvements line prior approaches specifically designed particular results highlight setting capture important purchasing tasks approach even simple policy search find policies significantly better performance competitive baselines tutoring asset replacement consider simulated domains compare gfse several approaches learning act quickly domains unless specified results averaged rounds error bars indicate confidence intervals baselines one natural idea proceed gfse use gathered data build parametric domain models used estimate performance potential policies found shorter trajectories collected close departure date prices fluctuate illustrative policy classes inadequate cases method adopted earliest purchase policy unfortunately authors unable provide split used groves gini call approaches second idea consider initial set collected data budget free exploration instead use budget monte carlo onpolicy evaluation set policies course exploration gfse always optimal also consider approach quickly identifying global optima function function initially unknown function evaluation expensive bayesian optimization multiple papers shown used speed online policy search reinforcement learning tasks wilson deisenroth rasmussen given policy class selects policy evaluate step maintains estimates expected value every policy use yelp moe yelp gaussian kernel popular expected improvement heuristic picking policies picked separate optimization find estimates simulated student learning first consider simulated student tutor domain number tutoring systems use mastery teaching student provided practice examples estimated mastered material optimal stopping problem time step observing whether student got activity correct tutor decide whether halt continue providing student additional practice halting student given next problem sequence objective maximize score posttest giving problems possible overall popular literature model student learning using bayesian knowledge tracing bkt model corbett anderson bkt hidden markov model hmm state capturing whether student mastered skill within hmm probabilities prior mastery transition mastery guess slip describe model simulate student data fix bkt generate student trajectories using bkt model problems gfse consider two policy classes halt probability student next response according model use correct crosses threshold thus halt threshold fact policies kind widely used commercial tutoring systems koedinger use bkt model implement policy class parameterized contains possible instantiations parameters approach search also consider policy class based another popular educational data mining model student learning additive factors model afm draney afm logistic regression model used predict probability student get next problem correct given past responses thus number correct past attempts first note gfse significantly effective results hold instantiations parameters well see ritter reasonable parameter 
settings figure comparison best policy found varying budget matched model setting model mismatch setting taking budget exploration using evaluate policy manner using monte carlo estimation precisely sample policies bkt policy class fix budget trajectories gfse uses trajectories evaluates policies runs every policy trajectories selects one highest mean performance averaging results across separate runs found gfse identifies much better policy chose poor policies mislead potential performance policy due limited data also explored performance building model domain setting model matches true domain bkt model case policy class based student afm model match bkt process dynamics use maximum likelihood estimation fit assumed model parameters given collected data separately optimize threshold parameters compare varying budget gfse results averaged trials results experiment shown figure approach well settings quickly finding near optimal policy one would expect approach well matched model setting making full use knowledge underlying process dynamics however fitting mismatched afm model approach suffers noted prior work mandel procedures focus maximizing likelihood observed data rather trying directly identify policy expected perform well find good policy takes samples since online approach whereas gfse uses fixed budget exploration also compare averaged cumulative performance variants gfse figure mimics scenario care online performance every individual trajectory rather access fixed budget deploying policy method choose collect less full trajectories finding best policy interestingly use trajectories initial budget collect full length trajectories gfse meets exceeds performance setting matched mismatched model cases within trajectories suffers highly stochastic returns policies setting efficient data reuse also consider variant evaluate proposed figure average cumulative performance simulated student domain matched model collects initial trajectories identifying executing best policy figure cumulative performance augmented methods matched model policy online also using previously collected trajectories yielding robust estimate policy performance similarly gfse deploy policy using initial budget trajectories use trajectory addition earlier trajectories rerun policy search identify another policy next time step figure shows improved methods especially mismatch case approaches still performing best asset replacement another natural problem falls optimal stopping problem replace depreciating asset car machine etc simulation use model described jiang powell variants model widely used field feldstein rothschild rust model observations dimensional vectors form asset starts fixed valuation xmax depreciates emitting observations every time step reward function used incorporates cost replacement increases time utility derived asset penalty asset becomes worthless replacement use experiments construct logistic threshold policy class replacing asset depr total depreciation xmax seen far normalized lie addition approaches seen also include baseline policies choose replace asset immediately never replace lastly include optimal value known hindsight reference results shown fig surprisingly method details model found jiang powell figure results asset replacement outperforms competing methods considerable margin appears chosen policy class tricky optimize policies space perform poorly random policies chosen space mean cost around confidence interval however domain noisy robust value estimation requiring less 
trajectories see figure enables method consistently find good policy even low budget one corresponds replacing asset depreciation around improves slowly either sampling bad policies due sparse nature space disbelieving estimate good policy due bad policies surrounding manually adjusting hyperparameters account improve performance significantly discussion conclusion gfse performed well outperforming algorithms common baselines variety simulations important domains randomly searched policies relatively simple policy classes illustration sophisticated search methods policy classes could employed without effecting theoretical guarantees derived another extension using shorter trajectories terminate horizon policy evaluation similar full trajectories used useful scenario get trajectories using best policy found gfse rerun policy search trajectories full length short collected far policy value estimates biased case since policy halts earlier shorter trajectory use evaluation values policies halt later may overestimated higher variance estimation due fewer trajectories biasing pick number evaluations per policy exceeds number theorem estimates would remain within true values high probability would minimize effect bias saw figure works well empirically summarize introduced method learning act optimal stopping problems reuses full length trajectories perform policy search theoretical analysis empirical simulations demonstrate simple observation lead benefits sample complexity practice acknowledgments appreciate financial support nsf bigdata award google research award yahoo gift references best graeme best wolfram martens robert fitch spatiotemporal optimal stopping problem mission monitoring stationary viewpoints robotics science systems corbett anderson albert corbett john anderson knowledge tracing modelling acquisition procedural knowledge user model deisenroth rasmussen marc deisenroth carl rasmussen pilco dataefficient approach policy search icml pages draney karen draney peter pirolli mark wilson measurement model complex cognitive skill cognitively diagnostic assessment pages etzioni oren etzioni rattapoom tuchinda craig knoblock alexander yates buy buy mining airfare data minimize ticket purchase price kdd pages feldstein rothschild martin feldstein michael rothschild towards economic theory replacement investment econometrica journal econometric society pages feng gallego youyi feng guillermo gallego optimal starting times sales optimal stopping times promotional fares management science ferguson thomas ferguson solved secretary problem statistical science pages glower michel glower donald haurin patric hendershott selling time selling price influence seller motivation real estate economics groves gini william groves maria gini optimizing airline ticket purchase timing acm tist jacka jacka optimal stopping american put mathematical finance jiang powell daniel jiang warren powell approximate dynamic programming algorithm monotone value functions operations research kearns michael kearns yishay mansour andrew approximate planning large pomdps via reusable trajectories nips pages koedinger kenneth koedinger emma brunskill ryan sjd baker elizabeth mclaughlin john stamper new potentials intelligent tutoring system development optimization magazine levine abbeel sergey levine pieter abbeel learning neural network policies guided policy search unknown dynamics nips pages lippman mccall steven lippman john mccall economics job search survey economic inquiry mandel travis mandel liu sergey levine 
emma brunskill zoran popovic offline policy evaluation across representations applications educational games aamas pages international foundation autonomous agents multiagent systems mordecki ernesto mordecki optimal stopping perpetual options processes finance stochastics jordan andrew michael jordan pegasus policy search method large mdps pomdps uai peskir shiryaev goran peskir albert shiryaev optimal stopping problems springer precup doina precup eligibility traces offpolicy policy evaluation computer science department faculty publication series page ritter steven ritter thomas harris tristan nixon daniel dickison charles murray brendon towle reducing knowledge tracing space educational data mining pages rust john rust optimal replacement gmc bus engines empirical model harold zurcher econometrica journal econometric society pages sutton richard sutton david mcallester satinder singh yishay mansour policy gradient methods reinforcement learning function approximation nips pages tsitsiklis van roy john tsitsiklis benjamin van roy optimal stopping markov processes hilbert space theory approximation algorithms application pricing financial derivatives ieee transactions automatic control vapnik kotz vladimir naumovich vapnik samuel kotz estimation dependences based empirical data volume new york wilson aaron wilson alan fern prasad tadepalli using trajectory data improve bayesian optimization reinforcement learning journal machine learning research yelp yelp metric optimization engine https zilberstein shlomo zilberstein operational rationality compilation anytime algorithms magazine
2
exact learning lightweight description logic ontologies exact learning lightweight description logic ontologies boris konev konev department computer science university liverpool united kingdom sep carsten lutz clu department computer science university bremen germany ana ozaki anaozaki department computer science university liverpool united kingdom frank wolter wolter department computer science university liverpool united kingdom editor abstract study problem learning description logic ontologies angluin framework exact learning via queries admit membership queries given subsumption entailed target ontology equivalence queries given ontology equivalent target ontology present three main results ontologies formulated two relevant versions description logic learned polynomially many queries polynomial size case ontologies formulated description logic even acyclic ontologies admitted ontologies formulated fragment related web ontology language owl learned polynomial time also show neither membership equivalence queries alone sufficient cases keywords exact learning description logic complexity introduction many subfields artificial intelligence ontologies used provide common vocabulary application domain interest give meaning terms vocabulary describe relations description logics dls prominent family ontology languages long history goes back brachman famous knowledge representation system early brachman schmolze today several widely used families dls differ expressive power computational complexity intended application important ones alc family aims high expressive power family baader aims provide scalable reasoning family calvanese artale tailored specifically towards applications data access world wide web committee standardised alc family ontology boris konev carsten lutz ana ozaki frank wolter language web called owl standard updated owl since comprises family five languages including owl profiles owl owl owl owl based owl owl closely related fragment obtained allowing concept names side concept inclusions paper study dls families designing ontology application domain subtle time consuming task beginnings research driven aim provide various forms support ontology engineers assisting design ontologies examples include ubiquitous task ontology classification baader reasoning support debugging ontologies wang schlobach support modular ontology design stuckenschmidt checking completeness modelling systematic way baader aim pursued field ontology learning goal use machine learning techniques various ontology engineering tasks identify relevant vocabulary application domain cimiano wong learn initial version ontology refined manually borchmann distel distel learn concept expressions building blocks ontology lehmann hitzler see recent collection lehmann related work section end paper details paper concentrate learning full logical structure description logic ontology starting point observation building ontology relies successful communication ontology engineer domain expert former typically sufficiently familiar domain latter rarely expert ontology engineering study foundations communication process terms simple communication model analyse within model complexity constructing correct complete domain ontology model rests following assumptions domain expert perfect knowledge domain able formalise communicate target ontology constructed domain expert able communicate vocabulary predicate symbols case dls take form concept role names shares ontology engineer ontology engineer knows nothing else 
domain ontology engineer pose queries domain expert domain expert answers truthfully main queries posed ontology engineer form concept inclusion entailed addition ontology engineer needs way find whether ontology constructed far called hypothesis ontology complete requests example illustrating incompleteness engineer thus ask ontology complete return concept inclusion entailed exact learning lightweight description logic ontologies interested whether target ontology constructed polynomially many queries polynomial size polynomial query learnability even better overall polynomial time polynomial time learnability cases polynomial size ontology constructed plus size counterexamples returned domain expert without taking account latter one never expect achieve polynomial time learnability domain expert could provide unnecessarily large counterexamples note polynomial time learnability implies polynomial query learnability converse false polynomial query learnability allows ontology engineer run computationally costly procedures posing queries model instance angluin framework exact learning via queries angluin context queries mentioned point called membership queries queries point form equivalence queries angluin framework however queries slightly general hypothesis ontology equivalent target ontology return concept inclusion entailed positive counterexample vice versa negative counterexample upper bounds polynomial learnability results admit queries restricted form point learning algorithm designed way hypothesis ontology consequence target ontology times thus meaningful equivalence query query form already complete lower bounds results saying polynomial learnability impossible contrast apply unrestricted equivalence queries assume hypothesis implied target way achieve maximum generality within setup outlined study following description logics member family admits role inclusions allows nested existential quantification side concept inclusions extension horn conjunction side concept inclusions basic member family fragment ellhs concept names compound concept expressions admitted side concept inclusions remark closely related owl based fragment allow nested existential quantification side concept inclusions restricted case though polynomial learnability uninteresting fact number concept inclusions formulated fixed finite vocabulary bounded polynomially size instead infinite description logics studied paper consequently tboxes trivially learnable polynomial time even membership queries equivalence queries available vice versa extension horn part owl standard admitting conjunctions side concept inclusions useful widely considered boris konev carsten lutz ana ozaki frank wolter polynomial query learnable horn polynomial query learnable elrhs polynomial time learnable ellhs figure summary main results extension basic dialects see example artale ellhs significant part owl language viewed natural fragment datalog even better approximation owl would extension ellhs inverse roles polynomial learnability language remains open problem finally unrestricted viewed logical core owl language introducing preliminaries section study exact learning ontologies section establishing polynomial query learnability strengthen result horn section using significantly subtle algorithm remains open whether horn admit polynomial time learnability algorithms yield stronger result since use subsumption checks analyse counterexamples provided oracle integrate current hypothesis ontology subsumption dls kikot section show 
ellhs ontologies learnable polynomial time result extends known polynomial time learnability propositional horn formulas angluin correspond ontologies without existential restrictions fact algorithms take inspiration learning algorithms propositional horn formulas combine underlying ideas modern concepts canonical models simulations products algorithm ellhs also uses subsumption checks case get way polynomial time learnability since subsumption ellhs decided polynomial time section establish ontologies polynomial query learnable note fragment elrhs symmetric ellhs admits concept names side concept inclusions fragment together upper bounds ellhs thus establish failure polynomial query learnability ontologies caused interaction existential restrictions sides concept inclusions interestingly result already applies acyclic tboxes disallow recursive definitions concepts rather restricted syntactic form however result rely concept inclusions counterexamples form allowed acyclic tboxes also show ontologies formulated horn ellhs neither polynomial query learnable membership queries alone equivalence queries alone corresponding results propositional horn formulas found frazier pitt angluin angluin see also arias figure summarises main results obtained paper section provide extensive discussion related work exact learning logical formulas theories close paper discussion open problems small number proofs deferred appendix exact learning lightweight description logic ontologies preliminaries introduce description logics studied paper consider representation concept expressions terms labelled trees show important semantic notions subsumption concept expressions characterised homomorphisms corresponding trees also involves introducing canonical models important tool throughout paper finally formally introduce framework exact learning description logics let countably infinite set concept names denoted upper case letters etc let countably infinite set role names disjoint denoted lower case letters etc concept role names regarded unary binary predicates respectively description logic constructors used define compound concept role expressions concept role names paper role constructor inverse role constructor expression inverse role semantically represents converse binary relation role expression role name inverse role set role name brevity typically speak roles rather role expressions concept constructors used paper everything conjunction qualified existential restriction formally concept expressions defined according following syntactic rule concept name role example denotes class individuals child whose gender male terminological knowledge captured finite sets inclusions concept expressions roles specifically concept inclusion takes form concept expressions role inclusion takes form roles ontology tbox finite set cis use abbreviations two cis likewise speak concept equivalences ces role equivalences res respectively description logic literature cis form introduced often called cis distinguish cis use concept expressions formulated description logics tboxes called lih tboxes tboxes consist cis ris boris konev carsten lutz ana ozaki frank wolter example consider following tbox prof graduate graduatestudent student graduate graduatestudent supervisor advisor graduate degree line states every professor supervises students conducts research notice specify specific area research hence use unqualified existential restriction form line defines graduate anyone degree line defines graduate student student graduate 
line states graduate students supervised professors notice use inverse role supervisor line shows states every supervisor advisor last line defines computer science graduate someone degree computer science signature set concept role names use denote signature tbox set concept role names occur size concept expression length string represents concept names role names considered length one size tbox defined semantics concept expressions tboxes defined follows baader interpretation given set domain mapping maps every concept name subset every role name subset interpretation inverse role given interpretation concept expression defined inductively exists interpretation satisfies concept expression empty satisfies written similarly satisfies written model tbox satisfies cis ris tbox entails symbols satisfied every model concept expressions equivalent written equivalence roles defined accordingly written tboxes logically equivalent symbols vice versa exact learning lightweight description logic ontologies deg deg stud grad grad stud con res grad grad prof deg sup adv figure illustration example example consider tbox example interpretation illustrated figure defined setting prof studenti graduate studenti graduatei graduatei conduct researchi supervisor advisor degreei degreei easy see model moreover graduate graduate graduatei graduatei thus graduate graduate shown graduate graduate decide given tbox concept inclusion whether baader reasoning problem known subsumption high complexity profiles owl based syntactically restricted description logics subsumption less complex next introduce relevant logics basic concept concept name concept expression form role example basic concept takes form basic concept concept expression inclusion tbox finite set inclusions example lines example inclusions line abbreviates two cis graduate graduate lines fall within horn extension horn cis take form basic concepts concept expression horn tbox finite set horn cis ris horn investigated detail artale boris konev carsten lutz ana ozaki frank wolter example lines example fall within also fall within horn line falls within horn line horn concept expression concept expression use inverse roles concept inclusion form concept expressions tbox finite set cis thus neither admit role inclusions inverse roles contrast horn however allows existential restrictions side cis example inclusions example inclusions inclusion inclusion subsumption horn see kikot lower bound calvanese artale upper bound subsumption ptime baader still true ris use inverse roles admitted tbox given tbox deciding whether possible ptime description logics considered paper fact exists sequence roles every either learning algorithms carry various subsumption checks subprocedure detailed later tree representation concept expressions achieve elegant succinct exposition learning algorithms convenient represent concept expressions finite directed tree whose nodes labelled sets concept names whose edges labelled roles describe manipulations concept expressions terms manipulations corresponding tree merging nodes replacing subgraphs modifying node edge labels etc generally use denote root node tree detail defined follows tree single node label concept name single node obtained adding new root edge root label call obtained identifying roots example student student three nodes root successor successor labelling nodes given student graduate student labelling edges given degree see figure left exact learning lightweight description logic ontologies grad stud grad con 
res deg stud prof sup figure illustration examples left right conversely every labelled finite directed tree described form gives rise concept expression following way single node labelled treat empty conjunction inductively let root labelled let successors let assume cdm concept expressions corresponding subtrees roots respectively example let tree root labelled prof successors labelled graduate respectively edge labelling given conduct research supervisor see figure right follows always distinguish explicitly tree representation allows speak example nodes subtrees concept expression one important use tree representation concept expressions truth relation entailment characterised terms homomorphisms labelled trees interpretations mapping tree corresponding concept expression interpretation homomorphism implies every concept name implies role names following characterisation truth relation means homomorphisms lemma let interpretation concept expression homomorphism mapping proof straightforward induction structure see example baader details example consider interpretation example tree representations concept expressions given figure seen functions defined homomorphisms lemma student student prof boris konev carsten lutz ana ozaki frank wolter also standard characterise subsumption relation subsumption relative empty tbox means homomorphisms tree representations homomorphism labelled tree labelled tree mapping nodes nodes implies every concept name implies every role lemma let concept expressions homomorphism maps direction essentially consequence lemma fact composition two homomorphisms homomorphism direction one consider interpretation apply lemma refer baader details next characterise subsumption presence tboxes terms homomorphisms achieve make use canonical model concept expression tbox want viewed interpretation denote rather precisely domain set nodes iff concept names iff roles names call root root obtained extending cis satisfied example single node concept role names distinct define add node set general defined limit sequence interpretations inductive definition sequence assume defined obtain applying one following rules take interpretation add identifying root detail assume define setting concept names role names aic sin define except sin role name otherwise role name define except assume rule application fair rule applicable certain place indeed eventually applied rule applicable set obtain setting concept names role names exact learning lightweight description logic ontologies grad con res sup grad sup adv stud con res stud con res sup prof prof grad sup adv con res sup adv prof grad deg stud grad sup adv sup adv prof con res sup adv prof figure canonical model construction example note interpretation obtained limit might following example illustrates definition example consider following tbox prof prof graduate graduate supervisor advisor concept expression prof figure illustrates steps canonical model construction canonical model following lemma provides announced characterisation subsumption presence tboxes lemma let tbox concept expression model following conditions equivalent every concept expression homomorphism maps exact shape depends order rule applications however possible resulting interpretations homomorphically equivalent consequence order rule application important purposes boris konev carsten lutz ana ozaki frank wolter proof completely standard see example give highlevel overview using construction hard show model implies follows lemma one show model homomorphism 
maps fact one constructs homomorphism interpretations built construction hard analysing rules applied construction homomorphism built extends thus take unions homomorphisms obtain homomorphism remains compose homomorphisms apply lemma going use canonical models lemma context horn next identify subtle property canonical models horn tboxes need later roughly speaking states form locality due fact existential restrictions side cis horn unqualified assume canonical model concept expression tbox assume know exists homomorphism mapping either maps elements maps whole tree interested latter case following lemma states tbox basic concept thus question whether depends concept names aic roles horn tbox might sufficient take single basic concept least set basic concepts suffices corresponding fact horn admits conjunctions side cis observation hold tboxes ultimately reason fact one polynomially learn tboxes following example illustrates observation example consider tbox let since therefore find homomorphism mapping homomorphism maps single node basic concept clearly observation sketched hold present result formal way let trees labelling functions respectively call subtree following conditions hold restriction successor successor well nic set concept names aic basic concepts exists lemma let horn tbox assume image subtree included exists nic moreover tbox exists set nic single concept exact learning lightweight description logic ontologies proof sketch property canonical models horn proved implicitly many papers example artale give sketch let conjunction nic assume consider canonical model definition coincide observe canonical model concept expression tbox obtained hooking interpretations root every horn concept expressions side cis basic concepts interpretations depend thus interpretations hooked coincide homomorphism given lemma provides homomorphism lemma derived contradiction one requires single member nic since side cis consists single basic concept close introduction description logics comments choice languages literature uncommon consider weaker variant basic concepts admitted side cis compound concepts often without loss generality since every tbox expressed using additional role names way standard reasoning tasks subsumption conjunctive query answering reduced polynomial time corresponding tasks reduction possible framework exact learning concerned paper fact contrast tboxes tboxes trivially polynomial time learnable using either membership queries equivalence queries polynomially many cis ris given signature exact learning introduce relevant notation exact learning learning framework triple set examples also called domain instance space set mapping say positive example negative example give formal definition polynomial query time learnability within learning framework let learning framework interested exact identification target concept representation posing queries oracles let memf oracle takes input returns yes otherwise membership query call oracle memf similarly every denote eqf oracle takes input hypothesis concept representation returns yes counterexample otherwise denotes symmetric set difference assumption regarding counterexample chosen oracle equivalence query call oracle eqf learning algorithm deterministic algorithm takes input allowed make queries memf eqf without knowing target learned similarity name concept expression accidental taken mean two notions closely related standard terminology respective area boris konev carsten lutz ana ozaki frank wolter eventually halts outputs 
say exact learnable learning algorithm polynomial query learnable exact learnable algorithm every step sum sizes inputs membership equivalence queries made step bounded polynomial target largest counterexample seen far arias finally polynomial time learnable exact learnable algorithm every step count call oracle one step computation computation time used step bounded polynomial target largest counterexample seen far clearly learning framework polynomial time learnable also polynomial query learnable aim paper study learnability description logic tboxes context gives rise learning framework follows set tboxes formulated set cis ris formulated every observe iff tboxes say tboxes polynomial query learnable learning framework defined polynomial query learnable likewise polynomial time learnability show directly representation assumption signature target tbox known learner note standard assumption example learning propositional horn formulas common assume variables target formula known learner learning tboxes prove tboxes polynomial query learnable inverse roles disallowed cis ris target tbox algorithm runs polynomial time thus shows tboxes restricted language polynomial time learnable without restriction polynomial time learnability remains open simplify presentation make two minor assumptions target tbox show later assumptions overcomed first assume entail role equivalences exist distinct roles allows avoid dealing classes equivalent roles simplifying notation second requirement bit subtle concept inclusion reduced form basic concepts side concept name tbox named form cis reduced form contains concept name role assume target tbox named form cis considered learner reduced form particular counterexamples returned oracle immediately converted form example although tbox example entail role equivalences cis reduced form named form fix introduce concept names asupervisor aconduct research aadvisor extend following equivalences asupervisor aconduct research exact learning lightweight description logic ontologies algorithm learning algorithm input tbox named form given oracle given learner output tbox computed learner compute hbasic basic set hadd hbasic hadd let returned positive counterexample relative hbasic hadd hadd replace hadd else add hadd end end return hbasic hadd aadvisor notice graduate acts name new definition needed role degree tbox named form develop learning algorithm instructive start version always terminate refined obtain desired algorithm version presented algorithm given signature target tbox learner starts computing set hbasic posing oracle membership query basic concept observe hbasic enters main loop note condition hbasic hadd line implemented using equivalence query oracle line refers counterexample returned oracle case equivalence hold counterexample must positive since maintain invariant hbasic hadd throughout run algorithm form hadd added hadd otherwise lines algorithm terminates hbasic hadd implying target tbox learned example tbox example algorithm first computes hbasic coincides except prof included since concept basic main loop counterexamples hbasic hadd logical equivalence modulo hbasic cis prof prof oracle returns first first iteration algorithm terminates immediately learned otherwise oracle first returns second returns first second iteration algorithm terminates hadd prof boris konev carsten lutz ana ozaki frank wolter equivalent consider five examples algorithm fails terminate polynomially many steps example motivating different modification step added algorithm 
lines final corrected algorithm given algorithm modification step takes input counterexample equivalence hbasic hadd modifies posing membership queries oracle obtain still counterexample additional desired properties cis satisfying five additional properties called five modification steps three different types two saturations steps underlying tree left unchanged labelling modified adding concept names node labels replacing roles edge labels two merging steps nodes tree merged resulting tree fewer nodes decomposition step replaced subtree subtree removed concept name side might replaced saturation merging steps change side result logically stronger sense contrast decomposition step regarded reset operation also side change logically related start example motivates first saturation step example let tnf tnf ensures named form first algorithm computes hbasic afterwards oracle provide equivalence query loop positive counterexample set inductively thus algorithm terminate informally problem learner example concepts used counterexamples get larger larger still none counterexamples implies address problem saturating implied concept names following discussion recall distinguish concept expression tree representation example say obtained adding concept name label node stands concept expression corresponding tree obtained adding label definition concept saturation let obtained concept saturation obtained adding concept name label node say concept saturated obtained concept saturation exact learning lightweight description logic ontologies observe learner compute concept saturated counterexample posing polynomially many membership queries oracle simply asks node concept name whether obtained adding label answer positive replaces proceeds example example continued cis concept saturated example concept saturation observe returned oracle first equivalence query transformed learner concept saturated line tbox tnf learned one step possible counterexamples returned oracle equivalence query hbasic form concepts concept saturation results concept form following example motivates second saturation step subsequent examples transform tboxes named form effect argument simplifies presentation example consider tboxes set first equivalence queries loop oracle provide positive counterexample always choosing fresh set intuitively problem learner example exponentially many logically incomparable cis entailed entail step towards resolving problem replace roles roles counterexamples definition role saturation let obtained role saturation obtained replacing edge label role role say role saturated obtained role saturation similarly concept saturation learner compute role saturated counterexample posing polynomially many membership queries observe example role saturation thus counterexample returned first equivalence query transformed role saturated algorithm terminates one step introduce motivate two merging rules example consider tbox boris konev carsten lutz ana ozaki frank wolter figure tree representation homomorphism fix set figure left illustrates concept expression assuming lemma since homomorphism tcm labelled tree corresponds shown figure thus oracle provide first equivalence queries positive counterexample always choosing fresh set problem learner example similar example exponentially many logically incomparable cis entailed entail step towards solving problem merge predecessor successor nodes node edge labels inverse resulting still implied tbox definition merging concept obtained concept merging obtained 
choosing nodes role removing setting making every role let obtained merging obtained merging say merged obtained merging note obtained merging definition show one use lemma natural homomorphism identity except similarly saturation operations learner compute merged posing polynomially many membership queries example merging illustrated figure first step nodes merged two additional merging steps give following example motivates second merging operation example define concept expressions induction follows exact learning lightweight description logic ontologies figure merging figure tree representation concept expressions let also let set figure illustrates concept expressions form answer first equivalence queries oracle compute positive counterexample always choosing fresh set deal example introduce modification step identifies siblings rather parent child definition sibling merging concept obtained concept sibling merging obtained choosing nodes role removing setting making every role let obtained sibling merging obtained sibling merging say sibling merged obtained sibling merging verified obtained sibling merging example counterexamples cnm actually sibling merged thus producing sibling merged directly counterexamples returned oracle overcome problem illustrated example instead apply sibling boris konev carsten lutz ana ozaki frank wolter merging line algorithm instead adding hadd learner computes sibling merged adds hadd example illustrated figure clearly counterexamples learner added required figure sibling merging finally need decomposition rule following variant example illustrates four modification steps introduced far yet lead polynomial learning algorithm even applied line line algorithm example let oracle provide equivalence query positive counterexample inductively algorithm terminate even four modification steps introduced applied lines cis concept role saturated sibling merged problem illustrated example far learning algorithm attempts learn without ever considering add hadd whose side rather deal problem introduce reset step contrast previous modification steps lead different side also imply original given previous modification steps definition decomposed let say decomposed every node every concept name every corresponds subtree rooted contrast previous four modification steps membership queries used learner obtain decomposed depend also hypothesis hadd hbasic computed point starting learner takes node concept name checks using membership query whether subtree rooted check succeeds replaced hbasic hadd otherwise obtained removing subtree rooted exact learning lightweight description logic ontologies figure illustration decomposition cis algorithm learning algorithm input tbox named form given oracle given learner output tbox computed learner compute hbasic basic set hadd hbasic hadd let returned positive counterexample relative hbasic hadd find hbasic hadd hadd find replace hadd else add hadd end end return hbasic hadd note thus one cis entailed hbasic hadd replaces original example assume oracle returns first counterexample tree corresponding shown side figure decomposed label node contains concept rooted since hadd case applies replaced finishes description modification steps turns cure problems initial version algorithm enable polynomial query learnability definition concept saturated role saturated merged sibling merged decomposed lines algorithm need make currently considered exhaustively applying modification steps described possible orders resulting refined version 
learning algorithm shown algorithm next analyse properties algorithm boris konev carsten lutz ana ozaki frank wolter polynomial query bound algorithm algorithm terminates obviously found tbox hbasic hadd logically equivalent thus remains show algorithm terminates polynomially many polynomial size queries observe hadd contains one concept name step loop either added hadd side existed hadd line existing hadd replaced fresh start showing lines implemented polynomially many membership queries next lemma addresses line lemma given positive counterexample relative hbasic one construct counterexample using polynomially many polynomial size membership queries proof let positive counterexample relative hbasic hadd assume five modification steps introduced applied exhaustively posing membership queries oracle observe number applications modifications steps bounded polynomially show let number nodes obtained concept role saturation step obtained merging decomposition step thus number applications merging decomposition steps bounded number applications concept role saturated steps bounded respectively thus steps modification step applicable final verify also positive counterexample relative hbasic hadd suffices show resulting single modification step entailed hbasic hadd former shown introduced modification steps regarding latter first four modification steps hbasic replaced hence hbasic hadd decomposition step already argued definition added entailed hbasic hadd following lemma addresses line lemma assume one construct using polynomially many polynomial size membership queries proof start using fact one show concept saturated role saturated iii merged decomposed assume example concept saturated one add new concept name label node resulting concept clearly node assume without loss generality let concept obtained adding since contradicts assumption concept saturated remaining three modification steps considered similarly exhaustively apply modification exact learning lightweight description logic ontologies step sibling merging use resulting desired similarly argument one show properties still properties applying sibling merging thus argued proof lemma already number applications sibling merging step form bounded number nodes thus number modification steps bounded polynomially analyse algorithm first prove polynomial upper bound size cis end require notion isomorphic embedding auxiliary lemma homomorphism isomorphic embedding injective concept names holds following lemma shows cis interpolates meaning homomorphism witnesses see lemma isomorphic embedding lemma assume homomorphism maps isomorphic embedding proof assume first injective sibling merging homomorphism natural homomorphism lemma thus derived contradiction assumption sibling merged let following labelled tree nodes iff aid two nodes successor unique role sid let concept expression corresponds lemma thus obtained concept role saturation steps concept role saturated already isomorphic embedding able prove cis polynomial size let denote number nodes tree representation let hbasic lemma proof assume let canonical model lemma homomorphism mapping lemma isomorphic embedding using decomposed show maps restriction lemma follows since injective proof contradiction assume exists may assume path mapped boris konev carsten lutz ana ozaki frank wolter particular parent mapped let observe whole subtree rooted must mapped since otherwise would injective let corresponds subtree rooted lemma exists basic concept named form exists concept name thus isomorphic 
embedding make case distinction decomposed since contains edge node label derived contradiction construction exists basic concept copy interpretation attached construction since construction fresh attached already satisfied derived contradiction position prove learning algorithm terminates posing polynomial number queries lemma every concept name number replacements hadd form bounded polynomially proof cis ever added hadd show replaced number nodes tree representation strictly larger number nodes tree representation lemma number replacements thus bounded polynomial note replaced thus suffices establish following claim obtained removing least one subtree prove claim since lemma homomorphism canonical model maps also homomorphism canonical model thus lemma isomorphic embedding trivially also isomorphic embedding means obtained removing subtrees since least one subtree must fact removed obtained following main result section theorem tboxes polynomial query learnable using membership equivalence queries moreover tboxes without inverse roles learned polynomial time using membership equivalence queries exact learning lightweight description logic ontologies proof recall algorithm requires target tbox named form first show theorem assumption argue assumption dropped iteration algorithm either added hadd replaced hadd since number times former happens bounded lemma number times latter happens polynomial number iterations algorithm polynomial polynomial query learnability tboxes remains show iteration algorithm makes polynomially many polynomial size queries size largest counterexample seen far start equivalence queries made line already argued number iterations polynomial thus number equivalence queries made regarding size observe cis hbasic cis hadd size cis hbasic constant lemma size cis hadd polynomial membership queries made lines suffices invoke lemmas moreover part theorem observe since membership equivalence query counts one step computation potentially costly step algorithm implementation decomposition step line relies making subsumption checks form hbasic hadd discussed section deciding subsumption role inclusions subsumption ptime without inverse roles fragment role inclusions obtain polynomial time learnability tboxes case drop requirement target tbox named form show polynomial query time learning algorithm tboxes named form transformed kind algorithm unrestricted target tboxes fact learner use membership queries entail compute every role class roles choose representative class whenever used counterexample returned oracle gets replaced likewise whenever name algorithm still uses concept name internal representations although longer included signature target tbox replaces counterexamples returned oracle also replaces membership queries oracle hypothesis used posing equivalence queries learning horn tboxes study exact learnability tboxes horn extension admits conjunctions basic concepts side cis language generalisation propositional horn logic fact algorithm present combines classical algorithms propositional horn logic angluin frazier pitt algorithm presented section resulting algorithm quite subtle indeed reason treated case separately section simplify presentation make assumptions section target tbox signature particular assume named form boris konev carsten lutz ana ozaki frank wolter algorithm learning algorithm horn tboxes input horn tbox named form given oracle given learner output tbox computed learner compute hbasic basic set hadd empty list hbasic hadd let returned positive 
counterexample find rhs left saturate lhs lhs lhs else end set hbasic hadd end return suitably generalised horn distinct roles role tbox contains equivalence cis either cis basic concepts contain concept expressions form side role denote lhs set concept names occur conjuncts side denote rhs set concept expressions occur conjuncts side nested inside restrictions often distinguish set lhs conjunction concept expressions similarly rhs example lhs rhs stands also lhs lhs lhs lhs stands algorithm learning horn tboxes shown algorithm like algorithm algorithm first determines set hbasic contains cis basic concepts ris hypothesis union hbasic hadd contrast algorithm hadd ordered list cis rather set write denote position list hadd learning algorithm working ordered list cis allows learner pick first hadd certain property merge new technique adopt angluin frazier pitt algorithm loop invariant thus necessarily positive algorithm terminates algorithm uses membership queries compute counterexample rhs contains one concept expression form line left saturated line left saturated side contains subsuming concept names definition satisfies conditions cis section appropriately modified cis conjunctions concept names side definition algorithm checks whether concept name lhs positive exact learning lightweight description logic ontologies algorithm function hadd lhs lhs lhs lhs concept saturate lhs lhs replace first hadd else concept saturate append list hadd end return algorithm function rhs form hadd lhs lhs lhs lhs lhs rhs find lhs lhs rhs replace first hadd lhs lhs else concept saturate lhs lhs replace first hadd end else append list hadd end return counterexample calls function algorithm updates hypothesis either refining hadd appending new hadd number replacements given hadd bounded since whenever replaced lhs lhs lhs concept name lhs positive counterexample algorithm calls function algorithm case one considers existential restrictions occur side note viewed variation body loop algorithm one considers sets concept names side cis rather single concept name recall algorithm new hadd merged concept name side contrast merged intersection still subsumed existential restriction rhs lines two cases intersection also subsumed rhs checked line next line counterexample computed first replaced new otherwise follows lhs lhs lhs consequence fact hadd contains concept saturated cis defined essentially previous section see definition lhs lhs lhs lhs lhs implies rhs concept saturatedness thus contradicting lhs lhs boris konev carsten lutz ana ozaki frank wolter first replaced computed line note latter happen times hadd former happen times hadd see lemma refined appends hadd define step used line algorithm observe step meaningless definition left saturation obtained left saturation rhs rhs lhs lhs left saturated coincides left saturation one clearly left saturate checking whether lhs every following example shows line necessary algorithm polynomial similar step also necessary frazier algorithm learning propositional horn logic entailments frazier pitt example assume line algorithm omitted let set oracle provide first equivalence queries loop algorithm positive counterexample always choosing fresh set refinements side extend horn notion essential cis introduced previous section concept saturation obtained concept saturation lhs lhs rhs obtained rhs adding concept name label node rhs concept saturated obtained concept saturation role saturation obtained role saturation lhs lhs rhs obtained rhs replacing edge label role 
role role saturated obtained role saturation merged obtained merging lhs lhs rhs obtained rhs merging definition merged obtained merging sibling merged obtained sibling merging lhs lhs rhs obtained rhs sibling merging definition sibling merged obtained sibling merging exact learning lightweight description logic ontologies decomposed decomposed every nonroot node rhs every role every rhs corresponds subtree rhs rooted definition horn concept saturated role saturated merged sibling merged decomposed saturation merging decomposition steps defined straightforward generalisations definitions cis conjunctions side one easily generalise arguments show one compute using polynomially many membership queries entailment checks relative analysis learning algorithm crucial cis ordered list hadd times prove next lemma point execution algorithm cis hadd proof form concept name set rhs concept saturation contains concept names thus follows cis added hadd lines essential also easy see rhs concept saturation lhs well thus line polynomial query bound algorithm previous section immediate upon termination algorithm found tbox hbasic hadd logically equivalent target tbox thus remains show issues polynomially many queries polynomial size first discuss lines algorithm line implemented next lemma addresses lines algorithm lemma given positive counterexample relative one construct polynomially many polynomial size membership queries counterexample left saturated rhs proof lemma straightforward extension proof lemma uses observation computed adding concept names lhs lhs lemma also requires rhs rhs lhs simply drop conjuncts form rhs otherwise satisfy condition simply choosing conjunct rhs lhs apply concept saturation lhs resulting left saturated one conjunct form rhs following lemma addresses line lemma assume rhs lhs lhs rhs one construct polynomially many polynomial boris konev carsten lutz ana ozaki frank wolter size membership queries lhs lhs rhs proof assume lhs lhs rhs similar lemma one show property cis fail sibling merged applying step sibling merging lhs lhs rhs resulting required also show number cis hadd bounded polynomially position hadd number replacements bounded polynomially properties follow following lemma lemma let hadd ordered list cis computed point execution algorithm length hadd bounded number cis number replacements existing hadd bounded polynomially rest section devoted proving lemma first show point lemma start generalising lemma size cis conjunction concept names set basic concept recall concept expression denote number nodes tree corresponding lemma nrhs proof thedproof almost proof lemma let let canonical model prove almost way proof lemma trhs mapping injective lemma horn instead assume one homomorphism mapping using position prove point lemma lemma number replacements existing hadd bounded polynomially exact learning lightweight description logic ontologies proof hadd replaced line lines replaced line line lhs lhs number replacements bounded replaced line either lhs lhs lhs lhs latter case one show proof lemma following claim obtained removing subtrees thus time hadd replaced line without decreasing number concept names lhs number nrhs nodes tree representation rhs strictly increases lemma nrhs bounded polynomially lemma follows come proof point lemma formulate upper bound length hadd terms convenient assume side every primitive either concept name concept expression form assumption since one equivalently transform every two cis call tbox note cis may still multiple concepts side 
concept called concept saturated whenever results adding new concept name label node denote sat unique concept obtained adding concept names node labels concept saturated following definition enables link cis hadd cis definition let say target lhs lhs exists rhs lhs rhs sat aim show algorithm maintains invariant iii every hadd target every target one hadd point lemma clearly follows example illustrate definition suppose target tbox simplify notation use denote occurring assume hbasic hadd let target however applying left saturation obtain since lhs lhs lhs target target making results boris konev carsten lutz ana ozaki frank wolter target finally let lhs lhs target note result making also target point iii consequence following lemma lemma let let target proof assume assume proof contradiction target first show following claim target lhs lhs proof claim consider canonical model ilhs lhs recall denotes root ilhs lemma ilhs iff lhs concept thus suffices prove ailhs implies lhs concept names proof induction sequence used construct ilhs ilhs ilhs case definition suppose claim holds either exist concept names exists first case lhs induction hypothesis lhs otherwise would target second case must case similar omitted follows lhs otherwise would target since induction hypothesis lhs sat finishes proof claim claim conjunct thed form rhs let lhs canonical model lhs rhs one show way proof lemma injective homomorphism labelled tree corresponding restriction mapping root root thus definition lhs lhs lhs rhs sat lemma ailhs lhs hence claim lemma lhs lhs shown target derived contradiction point iii direct consequence lemma fact cis hadd lemma prove point first establish following intermediate lemmas lemma let let inputs let hadd concept name lhs satisfy following conditions lhs lhs lhs lhs lhs replaced line exact learning lightweight description logic ontologies proof assume satisfy conditions lemma replaces done suppose happen need show replaced conditions lhs lhs left saturated lhs implies lhs lhs lhs condition lines satisfied replaced lemma let let inputs target hadd satisfies lhs lhs replaced line proof let satisfy conditions lemma replaces done suppose happen need show replaced first show concept form rhs lhs target note algorithm calls concept name lhs lhs line left saturated implies rhs lhs rhs lhs rhs sat compound target follows lhs target form rhs line algorithm one conjunct rhs rhs sat obtain lhs since lhs lhs lhs lhs lhs positive counterexample rhs lhs thus obtain lhs lhs lhs hence condition lines satisfied replaced line point direct consequence following lemma lemma point execution algorithm hadd target lhs lhs proof proof induction number iterations lemma vacuously true assume holds algorithm modifies hadd response receiving positive counterexample iteration make case distinction case algorithm calls let inputs assume first condition lines satisfied appends result concept saturating hadd call suppose lemma fails hold happen target hadd lhs lhs since lhs lhs lhs lhs since rhs concept name lhs rhs sat lhs lemma applies contradicts assumption replace hadd assume condition lines satisfied suppose lemma fails hold happen hadd either replaced lhs lhs target replaced target lhs lhs case lhs lhs lhs obtain boris konev carsten lutz ana ozaki frank wolter lhs lhs lhs thus lhs lhs contradicts induction hypothesis assume case since target obtain lhs lhs rhs rhs lhs rhs sat since lhs lhs lhs follows point lhs lhs lhs lhs rhs sat obtain lhs rhs lhs either rhs lhs lhs either target lhs target would 
contradict induction hypothesis thus lhs conditions lemma satisfied thus replaced contradicts assumption replaced case algorithm calls let inputs assume first condition lines satisfied appends hadd suppose lemma fails hold happen target hadd lhs lhs lemma contradicts assumption replace hadd assume condition lines satisfied suppose lemma fails hold happen hadd either replaced lhs lhs target replaced target lhs lhs case argue lhs lhs obtain lhs lhs lhs thus lhs lhs contradicts induction hypothesis assume case target obtain following lhs lhs rhs lhs rhs sat since lhs lhs lhs follows point lhs lhs lhs lhs recall algorithm calls lhs lhs rhs lhs left saturation assume since rhs lhs point follows lhs lhs lhs rhs rhs lhs means target contradicts induction hypothesis otherwise form either rhs rhs latter case rhs lhs rhs sat target contradicts induction hypothesis former case target satisfy conditions lemma thus replaced contradicts assumption replaced proved main result section theorem horn tboxes polynomial query learnable using membership equivalence queries moreover horn tboxes without inverse roles learned polynomial time using membership equivalence queries exact learning lightweight description logic ontologies proof polynomial query learnability horn tboxes follows lemma analysis number membership queries lemmas see proof theorem second part observe potentially costly steps entailment checks form horn tbox horn without inverse roles role inclusions entailment known ptime baader learning ellhs tboxes study polynomial learnability tboxes restriction ellhs concept names allowed side cis assume cis used membership queries equivalence queries returned counterexamples also restricted form show assumption ellhs tboxes learned polynomial time previous section learning algorithm extension polynomial time algorithm learning propositional horn theories presented angluin arias certain similarity learning algorithm section horn learning algorithm introduced section cases side inclusions contain complex concept expressions unless addressed might lead several counterexamples unnecessarily strong sides targeting inclusion target tbox algorithm storing multiple counterexamples hadd prevented taking intersection set conjuncts sides deal complex sides inclusions ellhs sophisticated way taking intersection concept expressions required define identify concept expressions interpretations take product products also employed construction least common subsumers baader detail say interpretation ditree interpretation directed graph directed tree distinct denote root ditree interpretation interpretation corresponding concept expression ditree interpretation root conversely every ditree interpretation viewed concept expression way labelled tree edge labels role names rather arbitrary roles seen concept expression interpretation given ellhs tbox notice ellhs inclusion interpretation indeed construction aic conversely given learning algorithm construct polynomial time inclusions form concept name posing membership queries oracle thus learning algorithm use inclusions interchangeably prefer working interpretations use notion products define intersection concept expressions results section linking homomorphisms entailment direct way boris konev carsten lutz ana ozaki frank wolter figure illustration example product two interpretations interpretation products preserve membership concept expressions lutz lemma interpretations concept expressions following holds one easily show product ditree interpretations disjoint union 
ditree interpretations ditree interpretations denote maximal ditree interpretation subinterpretation contains example figure depicts product ditree interpretations root root ditree interpretation root contain nodes observe product concept expressions concept names coincides interpretation conjunction concept names thus products seen generalisation taking intersection concept names side horn concept inclusions used section describe class sense minimal central learning algorithm let ellhs tbox learned assume signature known learner ditree interpretation use denote interpretation obtained removing root use denote subtree rooted removed essential following conditions satisfied intuitively condition states contradicts root reason satisfy least one condition minimality condition states exact learning lightweight description logic ontologies algorithm learning algorithm ellhs tboxes input ellhs tbox given oracle given learner output tbox computed learner set empty list ditree interpretations set let returned positive counterexample relative find essential let first element find essential replace else append end construct concept name end return longer remove node example end section shows working essential needed learning algorithm polynomial time algorithm learning ellhs tboxes given algorithm maintains ordered list ditree interpretations intuitively represents tbox constructed line line write homomorphism ditree interpretation ditree interpretation mapping denotes homomorphism exists lemma iff checked polynomial time size line write shorthand condition subinterpretation obtained removing subtrees note assumption line positive counterexample returned justified construction lines ensures times provide additional details realise lines line easiest simply use membership queries find cis entailed later show length bounded polynomially interpretation replaced polynomially many times therefore polynomially many membership queries suffice lines addressed lemmas lemma given positive counterexample relative one construct essential using polynomially many membership queries boris konev carsten lutz ana ozaki frank wolter proof let positive counterexample relative let ditree interpretation first observe since know occur conjunct consequently aic thus construct essential applying following rules saturate exhaustively applying cis rules add replace minimal subtree refuting address condition essential describe achieved using membership queries denote ditree interpretation obtained taking subtree rooted check using membership queries concept name whether replace exists exist exist replaced exhaustively remove subtrees condition essential also satisfied replace achieved using membership queries concept name show interpretation constructed required properties first observe clearly interpretation constructed step model taking subtrees removing subtrees preserves model conclude next show interpretation constructed step model fact use positive counterexample relative instead observe thus implies hand implies concept names consequently since thus construction steps preserve condition model remains argue satisfies conditions essential condition ensured step condition ensured step respectively lemma given essential one construct essential using polynomially many membership queries proof let essential obtain interpretation exhaustively applying rule proof lemma argued applying rule implemented using membership queries thus remains argue satisfies conditions countermodels condition show know thus lemma obtained removing 
subtrees removing subtrees preserves model exact learning lightweight description logic ontologies ellhs tbox thus condition show case subtree rooted would removed construction using rule algorithm terminates obviously returns tbox equivalent target tbox thus remains prove algorithm terminates polynomially many steps consequence following lemma lemma let list computed point execution algorithm length bounded number cis interpretation position replaced often new interpretation rest section devoted proving lemma easy reference assume point execution algorithm form establish point lemma closely follow angluin show iii every identical fact point iii immediate since whenever new added algorithm prove point first establish intermediate lemma ditree interpretation write satisfied root necessarily points easy see interpretation following lemma shows conditions algorithm replaces interpretation list lemma interpretation constructed line algorithm satisfies replaced line proof assume interpretation constructed line algorithm satisfies replaced line done thus assume aim show replaced line end suffices prove latter consequence assume contrary show establish contradiction holds construction algorithm showing cij cij boris konev carsten lutz ana ozaki frank wolter point cij imply cij gives cij lemma remains observe implies view construction algorithm point established showing cij since suffices prove cij however immediate consequence fact definition cij point consequence following lemma time algorithm execution following condition holds proof prove invariant formulated lemma induction number iterations loop clearly invariant satisfied loop entered consider two places modified line line starting latter line appended assume show added however immediate lemma assume replaced line show two properties assume contrary since obtained removing subtrees see lemma implies consequently former yields lemma contradiction latter case since replacement contradiction induction hypothesis assume contrary since obtained removing subtrees thus since replacement contradiction induction hypothesis turn towards proving point lemma consequence lemma lemma essential proof let essential follows lemma homomorphism mapping show follows suffices show surjective assume case let outside range homomorphism therefore lemma implies contradicts assumption essential violates condition essential exact learning lightweight description logic ontologies lemma let list interpretations replaces line proof let ditree interpretations set implies concept names first show every either via surjective homomorphism proof contradiction assume neither holds since obtained removing subtrees obtain since essential let subinterpretation determined range homomorphism mapping lemma since hold essential surjective derived contradiction addition property stated also follows either concept name lemma hence subsequence follows thus established main result section note obtain polynomial time learning algorithm since checking polynomial times tboxes cis discussed section theorem ellhs tboxes polynomial time learnable using membership equivalence queries following example shows algorithm terminate polynomial time line transform given counterexample essential example assume line algorithm modify counterexample given line second condition essential satisfied first condition hold target tbox oracle return infinite sequence positive counterexamples prime number fact algorithm would simply construct list interpretations prime number would terminate show observe 
algorithm would never replace list another since boris konev carsten lutz ana ozaki frank wolter assume line algorithm modify counterexample given line first condition essential satisfied second condition hold let tbox containing cis containing concept names say simplicity let oracle return positive counterexamples tree tci corresponding result identifying node tree corresponding root tree corresponding note product icn interpretation elements iteration algorithm computes interpretation exponential size limits polynomial learnability main result section tboxes polynomial query learnable using membership equivalence queries also show tboxes polynomial query learnable using membership equivalence queries alone latter result also holds ellhs tboxes case however follows already fact propositional horn logic polynomial query learnable entailments using membership equivalence queries alone frazier pitt angluin angluin start proving query learnability result tboxes way also prove query learnability tboxes using membership queries proof shows even acyclic tboxes polynomial query learnable fact heavily relies additional properties acyclic tboxes recall tbox called acyclic satisfies following conditions baader konev cis ces form concept name concept name occurs side cyclic definitions sequence cis concept name side occurs concept name side occurs side query learnability proof inspired angluin lower bound following abstract learning problem angluin learner aims identify one distinct sets property exists set assumed valid argument equivalence query learner pose membership queries equivalence queries worst case takes least membership equivalence queries exactly identify hypothesis proof proceeds follows every stage computation oracle viewed adversary maintains set hypotheses learner able distinguish based answers given far initially learner asks membership query oracle returns yes otherwise latter case unique removed learner asks equivalence query exact learning lightweight description logic ontologies oracle returns counterexample symmetric difference always exists valid query counterexample member one eliminated worst case learner reduce cardinality one exactly identify hypothesis takes queries similarly method outlined proof maintain set acyclic tboxes whose members learning algorithm able distinguish based answers obtained far didactic purposes first present set acyclic tboxes superpolynomial size every tbox oracle respond membership queries way described polynomial time learnable equivalence queries also allowed show tboxes modified obtain family acyclic tboxes polynomial query learnable using membership equivalence queries present tboxes fix two role names use following abbreviation sequence expression stands every sequence many consider acyclic tbox defined observe canonical model consists full binary tree whose edges labelled role names root level canonical model root labelled addition binary tree path given sequence whose endpoint marked concept name one use angluin strategy show tboxes set tboxes learned using polynomially many polynomial size membership queries notice sequence length thus membership query form eliminates one tbox set tboxes learner distinguish observation generalised arbitrary membership queries however instead observe tboxes formulated prove stronger result proof given appendix uses canonical model construction introduced section lemma every signature either every one argument outlined immediately gives following side result theorem tboxes even without inverse roles 
polynomial query learnable using membership queries boris konev carsten lutz ana ozaki frank wolter return proof tboxes polynomial query learnable using membership equivalence queries notice set tboxes suitable single equivalence query sufficient learn tbox two steps given equivalence query oracle option reveal target tbox found inside every counterexample strategy rule equivalence queries intersection tbox modify way although tbox axiomatising intersection set consequences exists size superpolynomial used equivalence query polynomial query learning algorithm every every every role sequence length define acyclic tbox union following cis observe every contains tboxes discussed replaced three concept names addition every entails among cis every either different cis indicates every representation intersection tbox requires superpolynomially many axioms follows lemma indeed case let set every sequence length exists one one choose different tuples notice size polynomial superpolynomial size let set tboxes learner distinguish initially use denote signature proof query learnability show oracle strategy answer membership equivalence queries without eliminating many tboxes start former unlike case presented membership query eliminate one tbox consider example two tboxes entailed prove however number tboxes eliminated single membership query linearly bounded size query fact prove query learnability suffices consider place concept equivalence however cis form allowed acyclic tboxes cis complex side concept equivalences essential query learnability acyclic tbox containing expressions form tbox thus polynomially learnable membership equivalence queries section exact learning lightweight description logic ontologies lemma cis either every number exceed proof lemma technical deferred appendix illustrate proof method consider particular case deals membership queries form used proof general case proofs rely following lemma konev characterises cis entailed acyclic tboxes lemma konev let andacyclic tbox role name concept expression suppose concept names concept expressions following holds concept name contain concept expression exists form either exists exists following lemma considers membership queries form lemma sequence role names concept expression either every one proof lemma follows following claim claim let either exists form concept expression proof claim prove claim induction lemma concept expression form concept name concept expression hold concept name distinct obtain thus point follows let lemma one following two cases form concept name concept expression exists point follows form concept expressions notice length sequence strictly less thus induction hypothesis point follows boris konev carsten lutz ana ozaki frank wolter finishes proof claim see claim entails lemma observe one satisfy point point entails every show oracle answer equivalence queries aiming show polynomial size equivalence query oracle return counterexample either one every thus counterexample eliminates one set tboxes learner distinguish addition however take extra care size counterexamples learning algorithm allowed formulate queries polynomial size target tbox also size counterexamples returned oracle instance hypothesis tbox contains entailed one simply return counterexample since learner able pump capacity asking sequence equivalence queries size twice size every stage run learning algorithm query size polynomial size input size largest counterexample received far exponential size queries become available learner following 
lemma addresses issue lemma tbox exists size exceed one every proof define exponentially large tbox use prove one select required way either vice versa define denote sequence conjunction define consider following cases suppose exists clearly entailed every size exceed thus required suppose exist concept expression form seen lemma appendix exists sequence role names thus show inclusion required clearly size exceed remains prove one exact learning lightweight description logic ontologies suppose exists lemma exists respectively easy see follow follows one finally suppose neither case apply every every concept expression form show unless exists satisfying conditions lemma contains least different cis thus derive contradiction fix obtain must exist least one let different concept names suppose exists notice contains concepts names except thus size exceed required assume contains conjunct size exceed required assume contains conjunct size exceed required none applies contains exactly exactly argument applies arbitrary thus exists satisfying conditions lemma final case contains least cis ingredients prove tboxes polynomial query learnable using membership equivalence queries theorem tboxes polynomial query learnable using membership equivalence queries proof assume tboxes polynomial query learnable exists learning algorithm whose query complexity sum sizes inputs membership equivalence queries made algorithm computation step bounded stage polynomial choose let follow angluin strategy letting oracle remove tboxes boris konev carsten lutz ana ozaki frank wolter way learner distinguish remaining tboxes given membership query every answer yes otherwise answer removed lemma tboxes given equivalence query answer counterexample guaranteed lemma produced one removed counterexamples produced smaller overall query complexity algorithm bounded hence learner asks queries size every query exceed lemmas tboxes removed run algorithm algorithm distinguish remaining tboxes derived contradiction conclude section showing tboxes learned using polynomially many polynomial size equivalence queries use following result query learnability monotone dnf formulas dnf formulas use negation using equivalence queries due angluin equivalence queries take hypothesis form monotone dnf formula return counterexample either truth assignment satisfies target formula vice versa let denote set monotone dnf formulas whose variables exactly conjunctions conjunction contains exactly variables theorem angluin polynomial exist constants strategy oracle answer equivalence queries posed learning algorithm way sufficiently large learning algorithm asks equivalence queries bounded size exactly identify elements employ theorem associate every monotone dnf formula xisi xisi tbox follows conjunct xisi associate concept expression occurs xisi otherwise role names let concept name set existence strategy direct consequence theorem angluin states class dnf formulae approximate fingerprint property proof theorem angluin strategy explicitly constructed class approximate fingerprints exact learning lightweight description logic ontologies example say tbox obtained translation monotone variables following form truth assignment variables also corresponds concept expression makes true otherwise holds truth assignments note represents variable false variable true thus captures monotonicity dnf formulas considered fixed values set note tboxes exactly tboxes satisfy additionally conditions dnf represented exactly conjunctions conjunction exactly variables 
describe strategy oracle answer equivalence queries learning algorithm able exactly identify members based answers polynomially many equivalence queries polynomial size tbox equivalence query obviously within class explicitly produce counterexample oracle return hand tbox equivalence query similar tboxes approximate tbox return counterexample corresponding truth assignment oracle theorem would return given detail strategy follows assume given polynomial theorem strategy oracle chosen sufficiently large learning algorithm dnf formulas asks equivalence queries bounded size distinguish members choose sufficiently large let equivalence tbox query issued learning algorithm following entails return negative counterexample entails return negative counterexample boris konev carsten lutz ana ozaki frank wolter return negative counterexample exists return positive counterexample suppose none applies say sequence whenever obtain tbox dnf representation setting observe sequence convert corresponding monotone dnf formula reversing translation monotone dnf formulas tboxes form obvious way note size linear size given oracle returns positive negative counterexample truth assignment return counterexample form observe answers given points correct sense inclusion returned negative example point trivially correct since monotone dnf satisfied truth assignment makes every variable true analyse size tbox computed point lemma assume points apply number sequences bounded proof first show exists concept expressions conjunct proof require canonical model lemma denote root let canonical model construction assumption points hold either exists aia holds show first condition hold assume prove contradiction aia lemma contradicts assumption point apply follows number distinct sequences bounded number distinct sequences conjunct thus number distinct sequences bounded exact learning lightweight description logic ontologies follows lemma size tbox computed point bounded theorem tboxes even without inverse roles polynomial query learnable using equivalence queries proof suppose query complexity learning algorithm tboxes bounded every stage computation polynomial size target tbox maximal size counterexample returned oracle current stage computation let let constants guaranteed lemma claim sufficiently large distinguish assuming maximal size counterexamples given point largest counterexample returned strategy described form sufficiently large maximal size counterexample run bounded similarly size every potential target tbox exceed constant sufficiently large bounded thus sufficiently large total query complexity input bounded obviously size query bounded query complexity learning algorithm size dnf equivalence query forwarded strategy guaranteed lemma bounded queries forwarded return answers distinguished remains observe distinguish related work related work already discussed introduction paper discuss detail related work ontology learning general exact learning ontologies particular start former ontology learning research ontology learning rich history discuss full detail collection lehmann surveys cimiano wong provide excellent introduction state art field techniques applied ontology learning range information extraction text mining interactive learning inductive logic programming ilp particular relevance paper approaches learning logical expressions rather subsumption hierarchies concept names example work lehmann haase lehmann hitzler applies techniques ilp learn description logic concept expressions ilp applied well lisi 
learning logical rules ontologies learning fuzzy dls considered lisi straccia machine learning methods applied learn ontology axioms include association rule mining arm niepert fleischhacker formal concept analysis fca rudolph baader distel borchmann ganter recently learnability lightweight boris konev carsten lutz ana ozaki frank wolter tboxes finite sets interpretations investigated klarman britz exact learning description logic concept expressions rather aiming learn tbox one interested learning target concept expression first studied cohen hirsh frazier pitt standard learning protocol follows membership query asks whether concept expression subsumed target concept expression symbols equivalence query asks whether concept expression equivalent target concept expression symbols equivalent oracle gives counterexample concept expression either cohen hirsh frazier pitt consider concept expressions variations largely historic description logic classic borgida borgida expressive power classic variants incomparable expressive power modern lightweight description logics classic shares conjunction unqualified existential restrictions form dls considered paper additionally admits value restrictions whose interpretation given unqualified number restrictions interpreted well various constructors using individual names example names individual objects classic concept denoting set aii denotes individual name interpretation proved cohen hirsh frazier pitt many fragments classic concept expressions learned polynomially using membership equivalence queries learned polynomial time using exact learning concept expressions modern lightweight description logics yet investigated exact learning tboxes using concept inclusions queries first results exact learning description logic tboxes using concept inclusions queries presented konev paper extension contrast konev make distinction polynomial time polynomial query learnability enables formulate prove results fine grained level tboxes horn prove polynomial query learnability considered konev current paper also closely related phd thesis third author ozaki addition results presented shown even extension ellhs role inclusions tboxes learned polynomial time learning algorithm extension algorithm presented ellhs tboxes exact learning lightweight description logic ontologies exact learning tboxes using certain answers recent years data access mediated ontologies become one important applications dls see poggi bienvenu kontchakov zakharyaschev bienvenu ortiz references therein idea use tbox specify semantics background knowledge data use deriving complete answers queries data context data stored abox consisting finite set assertions form concept names role name individual names given query typically conjunctive query tbox abox tuple individual names length called certain answer symbols every model satisfies motivated setup konev ozaki study polynomial learnability tboxes using membership queries ask whether tuple individuals names certain answer query abox target tbox natural alternative learning using concept inclusions since domain experts often familiar querying data particular domain logical notion subsumption concept expressions detail learning protocol follows membership query takes form asks whether tuple individual names certain answer query abox target tbox equivalence query asks whether tbox equivalent target tbox equivalent counterexample form given positive counterexample negative counterexample learning protocol yet specified class queries drawn 
strongly influences classes tboxes learned context data access using tboxes two popular classes queries conjunctive queries cqs existentially quantified conjunctions atoms instance queries iqs take form concept expression consideration role name konev ozaki exact learning tboxes languages ellhs studied iqs cqs queries positive learnability results proved polynomial reductions learnability results presented paper ozaki basic link learning using concept inclusions queries learning certain answers follows tbox concept expressions dls discussed one regard labelled tree corresponding abox root holds converse direction obtaining concept expression abox involved since aboxes additional unfolding step needed compute corresponding concept expression using link proved konev ozaki ellhs tboxes role inclusions learned polynomially many queries using certain answers iqs also proved still learnable polynomially many queries using certain answers neither iqs cqs query language tboxes learned polynomially many queries using certain answers cqs query language boris konev carsten lutz ana ozaki frank wolter exact learning fragments horn discuss results exact learning finite sets horn clauses fragments logic horn clause universally quantified clause one positive literal page arimura reddy tadepalli arias khardon arias selman fern depending used membership queries counterexamples equivalence queries one distinguish exact learning horn clauses using interpretations using entailments learning using entailments closer approach focus setting exact learning protocol follows membership query asks whether horn clause entailed target set horn clauses equivalence query asks whether set horn clauses equivalent target set equivalent counterexample given horn clause entailed positive counterexample vice versa considering terms function symbols allowed appear horn clause two main restrictions studied literature range restricted clauses set terms positive literal existent subset terms negative literals subterms constrained clauses set terms subterms positive literal existent superset terms negative literals example horn clause range restricted constrained horn clause constrained range restricted predicate symbol function symbol reddy tadepalli arimura shown certain acyclicity conditions horn range restricted clauses respectively constrained clauses polynomial time learnable entailments arity predicates bounded constant learning algorithm fragment horn called closed horn subsumes two languages defined presented arias khardon algorithm polynomial number clauses terms predicates size counterexamples exponential arity predicates also number variables per clause fact open question whether exists learning algorithm closed horn polynomial number variables per clause relate learnability results horn learnability results lightweight description logics presented paper observe dls particular dls investigated paper translated baader example translation ellhs translation translation every ellhs tbox regarded set range restricted horn clauses arity predicates bounded contrast since existential quantifiers nested right side cis cis translated horn clauses summarise relationship learnability results exact learning lightweight description logic ontologies ellhs horn results exact learnability horn entailments follows since arity predicates since function symbols admitted dls none dls considered paper express fragments horn discussed hand impose acyclicity condition tboxes contrast reddy tadepalli arimura algorithms polynomial number 
variables permitted clause contrast arias khardon thus results discussed horn translate polynomial learning algorithms ellhs applicable horn results thus cover new fragments yet considered exact learning surprising given fact fragments considered previously motivated applications ontology learning also related exact learning horn recent work exact learning schema mappings data exchange ten cate schema mappings tuples source schema finite set predicates target schema finite set predicates finite set sentences form conjunctions atoms respectively fagin gav schema mapping empty atom ten cate authors study exact learnability gav schema mappings data examples consisting database source schema database target schema data example satisfies authors present polynomial query learnability results protocols using membership equivalence queries query learnability results either membership equivalence queries allowed results presented ten cate applicable setting considered paper since learning protocol uses data examples instead entailments conclusion presented first study learnability ontologies angluin framework exact learning obtaining positive negative results several research questions remain explored one immediate question whether acyclic tboxes learned polynomial time using queries counterexamples form note query learnability result acyclic tboxes relies heavily counterexamples form another immediate question whether extension ellhs inverse roles better approximation ellhs still learned polynomial time least polynomially many queries polynomial size interesting research directions time learning algorithms tboxes admission different types membership queries counterexamples learning protocol example one could replace cis counterexamples interpretations acknowledgements lutz supported dfg project konev wolter supported epsrc project ozaki supported science without borders scholarship programme boris konev carsten lutz ana ozaki frank wolter references dana angluin learning propositional horn sentences hints technical report yale university dana angluin queries concept learning machine learning dana angluin negative results equivalence queries machine learning dana angluin michael frazier leonard pitt learning conjunctions horn clauses machine learning marta arias exact learning expressions queries phd thesis tufts university marta arias construction learnability canonical horn formulas machine learning marta arias roni khardon learning closed horn expressions information computation marta arias roni khardon maloberti learning horn expressions journal machine learning research hiroki arimura learning acyclic horn sentences entailment international workshop algorithmic learning theory pages alessandro artale diego calvanese roman kontchakov michael zakharyaschev family relations journal artificial intelligence research jair franz baader ralf ralf molitor computing least common subsumers description logics existential restrictions international joint conference artificial intelligence ijcai pages franz baader diego calvanese deborah mcguinness daniele nardi peter editors description logic handbook theory implementation applications cambridge university press new york usa isbn franz baader sebastian brandt carsten lutz pushing envelope international joint conference artificial intelligence ijcai pages franz baader bernhard ganter baris sertkaya ulrike sattler completing description logic knowledge bases using formal concept analysis international joint conference artificial intelligence ijcai pages 
franz baader carsten lutz sebastian brandt pushing envelope proceedings fourth owled workshop owl experiences directions exact learning lightweight description logic ontologies franz baader ian horrocks carsten lutz ulrike sattler introduction description logic cambridge university press meghyn bienvenu magdalena ortiz query answering datatractable description logics reasoning web semantic technologies advanced query answering international summer school pages meghyn bienvenu balder ten cate carsten lutz frank wolter data access study disjunctive datalog csp mmsnp acm trans database daniel borchmann felix distel mining ieee international conference data mining workshops vancouver canada daniel borchmann learning terminological knowledge high confidence erroneous data phd thesis higher school economics alexander borgida peter semantics complete algorithm subsumption classic description logic journal artificial intelligence research alexander borgida ronald brachman deborah mcguinness lori alperin resnick classic structural data model objects proceedings acm sigmod international conference management data pages ronald brachman james schmolze overview knowledge representation system cognitive science lorenz daniel fleischhacker jens lehmann melo johanna inductive lexical learning class expressions knowledge engineering knowledge management international conference ekaw pages diego calvanese giuseppe giacomo domenico lembo maurizio lenzerini riccardo rosati tractable reasoning efficient query answering description logics family journal automated reasoning philipp cimiano johanna paul buitelaar ontology construction handbook natural language processing second pages chapman william cohen haym hirsh learnability description logics equality constraints machine learning william cohen haym hirsh learning classic description logic theoretical experimental results principles knowledge representation reasoning pages felix distel learning description logic knowledge bases data using methods formal concept analysis phd thesis dresden university technology boris konev carsten lutz ana ozaki frank wolter ronald fagin phokion kolaitis miller lucian popa data exchange semantics query answering theoretical computer science daniel fleischhacker johanna heiner stuckenschmidt mining rdf data property axioms move meaningful internet systems otm pages springer michael frazier leonard pitt learning entailment application propositional horn sentences international conference machine learning icml pages michael frazier leonard pitt classic learning machine learning bernhard ganter sergei obiedkov sebastian rudolph gerd stumme conceptual exploration springer ernesto evgeny kharlamov dmitriy zheleznyakov ian horrocks christoph pinkel martin evgenij thorstensen jose mora bootox practical mapping rdbs owl international semantic web conference iswc pages stanislav kikot roman kontchakov michael zakharyaschev tractability obda owl proceedings international workshop description logics szymon klarman katarina britz ontology learning interpretations lightweight description logics inductive logic programming boris konev michel ludwig dirk walther frank wolter logical difference lightweight description logic journal artificial intelligence research jair boris konev carsten lutz frank wolter exact learning tboxes informal proceedings international workshop description logics pages boris konev carsten lutz ana ozaki frank wolter exact learning lightweight description logic ontologies principles knowledge representation reasoning 
boris konev ana ozaki frank wolter model learning description logic ontologies based exact learning conference artificial intelligence aaai pages roman kontchakov michael zakharyaschev introduction description logics query rewriting reasoning web semantic technologies advanced query answering international summer school pages exact learning lightweight description logic ontologies markus owl profiles introduction lightweight ontology languages reasoning web semantic technologies advanced query answering international summer school pages jens lehmann christoph haase ideal downward refinement mathcal description logic international conference inductive logic programming ilp pages jens lehmann pascal hitzler concept learning description logics using refinement operators machine learning jens lehmann johanna perspectives ontology learning volume ios press francesca lisi learning system semantic web mining international journal semantic web information systems francesca lisi umberto straccia learning description logics fuzzy concrete domains fundamenta informaticae carsten lutz robert piro frank wolter description logic tboxes characterizations rewritability international joint conference artificial intelligence ijcai pages yue felix distel learning formal definitions snomed text artificial intelligence medicine conference aime pages ana ozaki exact learning description logic ontologies phd thesis university liverpool charles david page constraint logics foundations applications learnability logic learning deduction phd thesis university illinois peter deborah mcguinness alexander borgida classic knowledge representation system guiding principles implementation rationale sigart bulletin antonella poggi domenico lembo diego calvanese giuseppe giacomo maurizio lenzerini riccardo rosati linking data ontologies data semantics chandra reddy prasad tadepalli learning acyclic horn programs entailment inductive logic programming pages sebastian rudolph exploring relational structures via fle international conference conceptual structures iccs pages stefan schlobach zhisheng huang ronald cornet frank van harmelen debugging incoherent terminologies journal automated reasoning boris konev carsten lutz ana ozaki frank wolter joseph selman alan fern learning definite theories via queries european conference machine learning principles practice knowledge discovery databases pages heiner stuckenschmidt christine parent stefano spaccapietra editors modular ontologies concepts theories techniques knowledge modularization volume lecture notes computer science springer balder ten cate dalmau phokion kolaitis learning schema mappings international conference database theory icdt pages johanna mathias niepert statistical schema induction semantic web research applications pages springer johanna daniel fleischhacker heiner stuckenschmidt automatic acquisition class disjointness journal web semantics hai wang matthew horridge alan rector nick drummond julian seidenberg debugging ontologies heuristic approach international semantic web conference iswc pages wilson wong wei liu mohammed bennamoun ontology learning text look back future acm computing surveys exact learning lightweight description logic ontologies appendix proofs section supply proofs lemma lemma addition prove claim used proof lemma start giving proof lemma lemma every signature either every one proof assume given occur claim readily checked thus assume occurs assume exists exists done let canonical model lemma apply following restricted form merging 
exhaustively concept expression nodes replace resulting concept expression merged let resulting concept expression recall lemma iff homomorphism mapping using fact ditree interpretation one readily check homomorphism mapping factors concept expression thus additional two homomorphisms domain mapping root roots respectively since occurs concept expression find sequence conjunct derived contradiction assumption distinct prove lemma require following observation lemma acyclic tbox concept expression form ready prove lemma lemma every either every number exceed proof prove lemma induction structure assume throughout proof exists base case concept name make following case distinction lemma form concept name follows every boris konev carsten lutz ana ozaki frank wolter lemma form concept name case either either case every form conjunct every assume form neither form form form let notice claim proof lemma must clearly number exceed thus either every number exceed induction step induction hypothesis either every exist different thus either every number also exceed assume suppose lemma either exists conjunct concept name exists conjunct analyse every conjunct form number respectively let conjunct concept name notice consider remaining cases easy see thus every lemma lemma either every suppose inductive applications lemma possible thus exactly one namely suppose equivalently lemma either thus unless let conjunct induction hypothesis implies number exceed exact learning lightweight description logic ontologies summarise either every every conjunct form number exceed hence number exceed next result used proof lemma lemma concept expression exists sequence role names either concept name proof prove lemma induction either role name concept name suppose lemma proved let proceed induction structure concept name done hold concept name form induction hypothesis exists sequence role names lemma lemma form exists lemma holds induction hypothesis
2
flic fast linear iterative clustering active search jiaxing nov qibin nankai university abstract paper reconsider clustering problem image new perspective propose novel search algorithm named active search explicitly considers neighboring continuity based search method design traversal strategy joint assignment update step speed algorithm compared earlier works simple linear iterative clustering slic use fixed search regions perform assignment update step separately novel scheme reduces number iterations required convergence also improves boundary sensitivity results extensive evaluations berkeley segmentation benchmark verify method outperforms competing methods various evaluation metrics particular lowest time cost reported among existing methods approximately fps image single cpu core facilitate development code publicly available introduction superpixels generated image take place pixels become fundamental units various computer vision tasks including image segmentation cheng image classification wang reconstruction hoiem efros hebert object tracking wang etc technique greatly reduce computational complexity avoid undersegmentation reduce influence caused noise therefore generate superpixels high efficiency plays important role many vision image processing applications generating superpixels important research issue group classical methods developed including felzenszwalb huttenlocher mean shift comaniciu meer watershed vincent soille etc lack compactness irregularity superpixels restrict applications especially contrast poor shadows present solve problems shi malik proposed normalized cuts shi malik generated compact superpixels however method ren corresponding author paul cardiff university adhere image boundaries well complexity high graphcut boykov veksler zabih veksler boykov mehrani regarded segmentation problem energy optimization process solved compactness problem using algorithms boykov kolmogorov kolmogorov zabin parameters hard control turbopixel levinshtein another method proposed solve compactness problem however inefficiency underlying method osher sethian restricts applications bergh proposed algorithm seeds whose results adhered boundaries well unfortunately suffers irregularity number superpixels uncertain ers liu although performs well berkeley segmentation benchmark high computational cost limits practical use achanta proposed linear clustering based algorithm slic generates superpixels based lloyd algorithm lloyd also known voronoi iteration assignment step slic key point speed algorithm pixel associated cluster seeds whose search regions overlap location strategy also adopted subsequent works based slic slic widely used various applications wang high efficiency good performance inspired slic wang implemented algorithm sss considered structural information within images uses geodesic distance computed geometric flows instead simple euclidean distance however efficiency poor bottleneck caused high computational cost measuring geodesic distances recently liu proposed manifold slic generated contentsensitive superpixels computing centroidal voronoi tessellation cvt faber gunzburger special feature space advanced technique makes much faster sss still slower slic owing cost mapping splitting merging processes aforementioned descriptions see abovementioned methods improve results either using complicated distance measurements providing suitable transformations feature space however assignment update steps within methods formed separately leading low convergence rate paper 
consider problem new perspective pixel algorithm allowed actively search superpixel belong according neighboring pixels shown figure meantime seeds superpixels adaptively changed process allows assignment update steps performed jointly property enables approach converge rapidly sum main advantages algorithm features good awareness neighboringpixel continuity produces results good boundary sensitivity regardless image complexity contrast algorithm performs assignment step update step joint manner high convergence rate well lowest time cost among superpixel segmentation approaches experiments show approach able converge two scan loops better performance measured variety evaluation metrics berkeley segmentation benchmark preliminaries introducing approach allows adaptive search regions joint assignment update steps first briefly recap standard previous scheme fixed search regions separate steps typical one slic algorithm improves lloyd algorithm reducing time complexity number superpixels number pixels let color image represents corresponding variable pixel given set evenly distributed seeds slic simplifies lloyd algorithm get centroidal voronoi tessellation cvt faber gunzburger introduced section assignment step pixel associated cluster seeds whose search regions overlap location shown figure area search region denoted specifically slic considers lie five dimensional space contains three dimensional cielab color space two dimensional spatial space slic measures distance two points using weighted euclidean distance computed seed pixel slic seed pixel active search figure search method used slic seed searches limited region reduce computation complexity proposed active search pixel able decide label searching surroundings obtains results iteratively performing assignment update steps works slic also use similar procedure slic improve performance slic using better distance measures suitable transformation function color space spatial space however algorithms search region fixed assignment step single loop relationship among neighboring pixels largely ignored allocating pixels superpixels separately performing assignment step update step also leads delayed feedback pixel label change proposed approach since superpixels normally serve first step vision related applications generate superpixels good boundaries fast speed crucial problem unlike previous algorithms achanta liu consider problem new aspect surrounding pixels considered determining label current pixel pixel actively selects superpixel belong order provide better estimation regions moreover assignment step update step performed jointly iterations required approach reach convergence overview algorithm found alg problem setup given desired number superpixels input image number pixels goal produce series disjoint small regions superpixels following previous works achanta original rgb color space transformed cielab color space proven useful thus pixel image represented five dimensional space update step slic recomputes center superpixel moves seeds new centers first divide original image regular grid containing elements step length variable controls weight spatial term variables respectively spatial color distances expressed algorithm flic require image pixels desired number superpixels maximal iteration numbers itrmax spatial distance weight ensure superpixels divide original image pinto regular grids step length initialize labels pixels according locations move seed lowest gradient position neighborhoods initialize seeds regard pixels 
sharing label superpixel initialize distance pixel itr itr itrmax superpixel use scan traverse superpixel get pixels processing sequence section pixel sequence set sli eqn compute slj eqn end end end changed use eqn update use eqn update update bounding box section end end end end achanta initial label pixel assigned initialize seed centroid therefore also defined five dimensional space label decision natural images adjacent pixels tend share labels neighboring pixels natural continuity thus propose active search method able leverage much priori information possible method unlike previous works achanta liu label current pixel determined neighbors compute distances current pixel seeds four eight adjacent pixels figure provides intuitive illustration specifically pixel assignment principle argmin slj consists four neighboring pixels slj corresponding superpixel seed use eqn measure distance slj since pixel assigned superpixel containing least one neighbors local pixel continuity stronger effect proposed strategy allowing pixel actively assign one surrounding closely connected superpixel regions advantages strategy obvious first nearby assignment principle avoid occurrence many isolated regions indirectly preserving desired number superpixels second assignment operation limited fixed range space resulting better boundary adherence despite irregular shapes superpixels complicated content furthermore assignment process superpixel centers also modified leading faster convergence detailed demonstration analysis found section worth mentioning neighbors internal pixels superpixels normally share labels unnecessary process fact allows process superpixel extremely quickly traversal order traversal order plays important role approach appropriate scanning order may lead visually better segmentation demonstrated section label pixel depends seeds surrounding pixels indicates superpixel label current pixel directly indirectly related pixels already dealt better take advantage avalanche effect adopt traversal order patchmatch barnes pixels processed later benefit previously processed pixels figure makes process clear forward pass label decision pixel considers information top surrounding pixels superpixel similarly backward pass provide information bottom surrounding pixels superpixel scanning order surrounding information taken consideration yielding better segments considering arbitrary superpixel might irregular shape instead simple rectangle square actually use simplified strategy traverse whole superpixel superpixel first find minimum bounding box within pixels enclosed shown figure perform scanning process pixels corresponding minimum bounding box deal pixels within superpixel joint assignment update step common phenomenon existing methods slic achanta assignment step region bounding box forward scan order backward scan order bounding box updating figure illustration scanning order superpixel use gray regions enclosed blue lines represent superpixels use red dashed rectangles denote corresponding bounding boxes shown first scan bounding box left right top bottom opposite direction shape superpixel might change update bounding box occurs leave unchanged changes superpixel shape date step performed separately leading delayed feedback pixel label changes superpixel seeds obvious problem strategy many normally five iterations required becomes bottleneck fast convergence approach based assignment principle eqn design joint assignment update strategy operate two steps finer granularity approximately joint 
step able adjust superpixel seed center position fly drastically reducing number iterations needed convergence since superpixel methods use centroidal voronoi tessellation cvt briefly introduce cvt first describe method let set seeds image expected number superpixels voronoi cell seed denoted vsk arbitrary distance measure pixel seed voronoi diagram defined vsk cvt defined voronoi diagram whose generator point voronoi cell also center mass mentioned traditional cvt usually obtained heuristic algorithms lloyd algorithm iteratively performing updates assignment step convergence reached approach account novel label decision strategy shown eqn able jointly perform update step assignment step instead separately specifically pixel processed label changed instance immediately update current seed sli using following equation sli sli update slj using following equation slj number pixels superpixel bounding box also updated thereafter noteworthy mention updates contain simple arithmetic operations hence performed efficiently immediate update help later pixels make better choice assignment leading better convergence figure shows convergence speed approach experiments method implemented runs intel core cpu ram bit operating system compare method many previous current works including felzenszwalb huttenlocher slic achanta manifold slic liu seeds van den bergh ers liu benchmark using evaluation methods proposed arbelaez stutz hermans leibe note source codes used evaluation works may different versions find leads performance difference original reports different implementation evaluation code applied give fair comparison uniformly use publicly available source code arbelaez stutz hermans leibe methods previous researches literature liu wang evaluate algorithms randomly selected images resolution berkeley dataset slj parameters approach three parameters need set first one number superpixels one common advantages algorithms expected ers seeds slic undersegment error boundary recall number superpixels curves time seconds ers seeds slic time seconds asa boundary recall ers seeds slic achievable segmentation accuracy ers seeds slic time seconds figure comparisons existing methods approach flic benchmark fixed demonstrate best performance efficiency competing methods seen strategy significantly outperforms methods similar time cost boundary recall least competitive results also achieved compared slower methods method ers liu evaluation metrics order magnitude faster speed itr normal itr itr time seconds boundary recall time time time time itr number superpixels number superpixels time cost figure part sensitivity analysis standard evaluation metrics time cost number superpixels directly obtained setting clustering parameter second one spatial distance weight parameter large effect smoothness compactness superpixels shall show performance increase decreases however small also lead irregularity superpixels achieve good compactness performance following experiments set default last parameter number iterations itr set itr default get balance time cost performance stressed compare methods fair way method optimize parameters maximize recall value computed benchmark table boundary recall time cost comparisons different superpixel counts comparison existing methods approach outperforms previous methods similar computational efficiency achieve least comparable results compared slower algorithms order magnitude faster speed details discussed boundary recall boundary recall measurement denotes adherence boundaries 
computes fraction ground truth edges falls within length least one superpixel boundary achanta computed brg respectively denote union set superpixel boundaries union set ground truth boundaries indicator function checks nearest pixel within distance follow achanta liu set experiment boundary recall curves different methods plotted figure one easily observe flic method outperforms methods undersegment error undersegment error reflects extent superpixels exactly overlap ground truth segmentation similar also reflect boundary adherence difference uses segmentation regions instead boundaries measurement mathematically neubert protzel computed min sin sout union set superpixels union set segments ground truth sin denotes overlapping superpixel ground truth segment sout denotes rest superpixel shown figure results nearly best approach ers liu run significantly faster achievable segmentation accuracy asa asa gives highest accuracy achievable object segmentation utilizes superpixels units similar asa utilizes segments instead boundaries computed liu maxi asag slic boundary recall boundary recall spatial distance weight slic iteration figure curves spatial distance weight eqn overall performance far better slic tested curves method converges within iterations much faster slic boundary recall jointly separately iteration method approach adheres boundaries well runs twice fast compared ers method resulting superpixels much regular mean execution time approach times shorter aforementioned facts figure reflect approach achieves excellent compromise among adherence compactness time cost figure comparison convergence rate joint separate assignment update steps represents superpixel represents ground truth segment better superpixel segmentation larger asa value shown figure compared ers liu performance approach competitive method achieves best performance time cost time cost similar slic method also achieves time complexity know computation efficiency one important points using superpixels elementary units many approaches limited speeds sss wang ers liu shown figure average time cost flic two iterations processing image time costs ers manifold slic slic respectively obvious flic lowest time cost among methods runs nearly times faster ers comparable result quality visual results analysis figure show several superpixel segmentation results using different algorithms seen approach sensitive image boundaries especially poor contrast foreground background compared slic algorithm analysis efficacy traverse order shown figure adopt traverse order scan whole region enclosed bounding box superpixel actually couple forward scans also perform well method provide comparison two strategies using pure forward scan order four iterations versus using proposed scan order twice also four iterations figure shows quantitative comparisons two strategies blue line represents results using normal forward scan order red line stands results using method seen red curve significantly outperforms blue one achieves competitive time cost compared blue curve fact reflects scan order considers information regions outside bounding box leading reliable boundaries role spatial distance weight shown figure unlike slic achanta curve respect spatial distance weight monotonically decreasing approach reason phenomenon method local region continuity mostly ensured active search algorithm color boundaries less well preserved larger hand small result less regular superpixels choose comparison previous works noteworthy mention superpixels normally 
considered first step vision tasks vision tasks often favor superpixel methods good boundaries therefore users select reasonable value according specific conditions case overall performance significantly better values convergence rate flic significantly accelerates evolution need iterations convergence compare performance curves different iterations berkeley benchmark easily found figure algorithm quickly converges within two iterations iterations bring marginal benefits results numerically boundary recall superpixels one iteration set value two iterations three iterations generating number superpixels undersegment error values respectively achievable segmentation accuracy values respectively seen figure algorithm converges much faster slic requires ten iterations converge also obtains better performance role joint assignment update algorithm jointly performs assignment update steps figure show convergence rates slic seeds ers figure visual comparison superpixel segmentation results using different existing algorithm superpixels approach adheres boundaries well time produces compact superpixels figure images segmented proposed approach number superpixels set respectively resulting superpixels adhere region boundaries well figure images segmented proposed approach respectively tends smaller value superpixels adhere well boundaries becomes larger superpixels become compact joint approach separately performing assignment update steps one observe joint approach converges quickly two iterations needed separate approach needs another two iterations reach value phenomenon demonstrates joint approach efficient without negative effect final results effect size neighborhoods mentioned section implementation label current pixel relies four neighborhood pixels actually using eight neighborhood pixels also reasonable neighbors definitely provide useful information table briefly compare results two cases natural observation using larger neighborhoods leads increase performance cost reducing running speed regard real applications users select either case suit preferences qualitative results figure show segmentation results produced approach number superpixels set respectively seen range value edges resulting superpixels always close boundaries phenomenon especially obvious first image third image also show segmentation results different values figure tends smaller values example shapes resulting superpixels become less regular larger example resulting superpixels become compact conclusions paper present novel algorithm using active search able improve performance significantly reduce time cost using superpixels oversegment image taking advantage local continuity algorithm provides results good boundary sensitivity regardless image contrast complexity moreover able converge two iterations achieving lowest time cost compared previous methods ing performance comparable method ers running time used various evaluation metrics berkeley segmentation benchmark dataset demonstrate high efficiency high performance approach acknowledgments research sponsored nsfc cast huawei innovation research program hirp ibm global sur award references achanta shaji smith lucchi fua slic superpixels compared superpixel methods ieee trans pami arbelaez maire fowlkes malik contour detection hierarchical image segmentation ieee trans pami barnes shechtman finkelstein goldman patchmatch randomized correspondence algorithm structural image editing acm transactions boykov kolmogorov experimental comparison algorithms energy 
minimization vision ieee trans pami boykov veksler zabih fast approximate energy minimization via graph cuts ieee trans pami cheng liu hou bian torr hfs hierarchical feature selection efficient image segmentation eccv springer comaniciu meer mean shift robust approach toward feature space analysis ieee trans pami faber gunzburger centroidal voronoi tessellations applications algorithms siam review felzenszwalb huttenlocher efficient graphbased image segmentation international journal computer vision hoiem efros hebert automatic photo acm transactions graphics tog kolmogorov zabin energy functions minimized via graph cuts ieee trans pami levinshtein stere kutulakos fleet dickinson siddiqi turbopixels fast superpixels using geometric flows ieee trans pami liu tuzel ramalingam chellappa entropy rate superpixel segmentation computer vision pattern recognition cvpr ieee conference ieee liu manifold slic fast method compute superpixels proceedings ieee conference computer vision pattern recognition lloyd least squares quantization pcm ieee transactions information theory neubert protzel superpixel benchmark comparison proc forum bildverarbeitung osher sethian fronts propagating speed algorithms based formulations journal computational physics keriven cohen geodesic methods computer vision graphics foundations trends computer graphics vision shi malik normalized cuts image segmentation ieee trans pami stutz hermans leibe superpixel segmentation using depth information rwth aachen university aachen germany van den bergh boix roig capitani van gool seeds superpixels extracted via sampling european conference computer vision springer veksler boykov mehrani superpixels supervoxels energy optimization framework european conference computer vision springer vincent soille watersheds digital spaces efficient algorithm based immersion simulations ieee trans pami wang yang yang superpixel tracking international conference computer vision ieee wang zeng gan wang zha superpixels via geodesic distance international journal computer vision wang feng yan image classification via holistic superpixel selection ieee transactions image processing
1
asymptotically optimal indirect approach system identification mar rodrigo cristian rojas james welsh indirect approach system identification consists estimating models first determining appropriate model hold sampling mechanism approach usually leads transfer function estimate relative degree independent relative degree strictly proper real system paper refinement methods developed inspired indirect pem propose method enforces fixed relative degree transfer function estimate show resulting estimator consistent asymptotically efficient extensive numerical simulations put forward show performance estimator contrasted indirect direct methods system identification index system identification systems parameter estimation sampled data ntroduction system identification deals problem estimating adequate models dynamical systems data methods developed years field seen applications many areas science engineering comprehensive literature written subject postulating mathematical model describing dynamical system based sampled data one must decide obtaining continuoustime model predominantly digital era system identification studied thoroughly see nevertheless interest models still persists due advantages example greybox modelling commonly based physical principles conservation laws naturally suited continuous time parameters usually better interpreted domain also models known intuitive dynamics depend sampling period system identification two main approaches namely direct indirect approaches direct system identification model obtained directly sampled data main difficulty present direct methods handling derivatives immediately available discrete data points without amplifying work supported swedish research council contract number newleads rojas department automatic control access linnaeus centre kth royal institute technology stockholm sweden grodrigo crro james welsh school electrical engineering computer science university newcastle australia noise effectively deal issue many well known methods proposed success real applications hand indirect methods modelling first determine suitable model via system identification methods like prediction error methods pem maximum likelihood transform model equivalent model evidence shown regarding advantages direct indirect model identification although precise initialisation pem approaches seem comparable certain sampling periods even though indirect approach seems easy implement much theory literature concerning system identification reasons approach always recommended first may suffer numerical inaccuracies fast sampling requires precise initialisation addition possible select desired numerator order model estimated model generally lead model relative degree case sampling hold mechanism hence unnecessarily complex model structure indirectly estimated leads loss accuracy according parsimony principle paper introduce method optimally imposes desired relative degree indirect approach system identification based indirect pem prove proposed estimator consistent asymptotically efficient estimator system true parameter vector extensive numerical simulations show new method imposes correct relative degree improving statistical properties transfer function estimate achieves performance compares favourably standard direct indirect approaches remainder paper organised follows section problem formulated section iii provides introduction indirect approach system identification section derive estimator optimally enforces desired relative degree indirect approach determine 
properties section illustrates method extensive numerical examples finally conclusions drawn section roblem formulation consider linear causal stable single input single output system heavyside operator relative degree system paper denote true system parameter vector suppose signals sampled period resulting output contaminated additive white noise sequence variance goal system identification obtain transfer function estimate given discrete data measurements knowledge physical characteristics system intersample behaviour paper assume input piecewise constant signal samples hold behaviour obtaining model simple way proceed identify hold equivalent model given input output data measurements using standard pem domain return domain via hold equivalences although procedure good statistical properties impose relative degree constraints domain goal optimally impose constraint lead statistically improved estimate iii indirect approach continuous time system identification one approach identifying system first estimate model given input output data samples translate model continuous time called indirect approach since relies system identification theory instead obtaining immediately model using system identification methods much literature written regarding first step indirect approach theoretically optimal solution apply maximum likelihood method method known give consistent asymptotically efficient estimates general conditions assumption additive white noise gaussian method equivalent pem one celebrated parametric methods available matlab system identification toolbox model required propose model structure form define denote model output estimate arg min kym next step transform transfer function adequate model done several ways example tustin transformation applied transfer function estimate letting reported assumed input signal piecewise constant natural mapping used hold sampling equivalence denote laplace transforms respectively mapping known troublesome part indirect approach ill conditioned uniqueness depends correct choice sampling period mappings resulting model numerator parameters exceed desired numerator orders generally case relative degree greater problem contributes poor accuracy high standard deviations high frequencies one simple way treating issue setting zero numerator coefficients zero best way taking care information ptimal enforcement relative degree section develop estimator parameter vector renders transfer function estimate desired relative degree matter first focus pem estimator zeroorder hold equivalent model simplicity assume correct model order found model structure obtained pem covariance matrix also estimated know hold equivalent estimated model general given define denote true parameter vector parameters related hold equivalence equations derived using comparing coefficients relation nonlinear mapping differentiable almost everywhere hence following asymptotic relationship valid covariance matrices jacobian matrix evaluated naive estimation throwing away high order coefficients numerator propose find appropriate projection proper subspace parameter space yields desired relative degree subspace simply one formed vectors first elements set zero hence decide study following problem arg min identity matrix dimension null matrix appropriate dimensions optimisation problem context interpreted application indirect pem lagrange multiplier theory optimization problem equivalent calculating suitable arg min partitioning appropriately dropping subindex simplicity differentiating 
respect obtain imposing obtain denote cholesky factorization matrix lower triangular matrix positive diagonal entries write estimator seen best approximation pem estimate imposes desired relative degree properties briefly present important properties estimator following theorems theorem consider system described gaussian white noise sequence assume sampling frequency larger twice largest imaginary part poles delay real estimator consistent asymptotically efficient estimator real vector parameter provided model set chosen relative degree contains real system note standard pem estimate could also used matter asymptotic relation still holds conditions relaxed long sampling frequency transformation well defined proof gaussian noise assumption pem estimate interpreted estimate proposed sampling frequency transformation unique sufficiently large hence invariance principle estimators equivalent system parameters also estimate prove theorem require assumptions satisfied scenario directly apply results obtained cited contribution first note model structure given procedure contains models desired relative degree contention proper linear mapping parameter vectors given matrix furthermore provided model set contains real system structures give parameter identifiability also note obtained via consistent estimate covariance matrix hence results namely estimation normalized follow asymptotically normally distributed zero means asymptotic covariance matrices satisfy relation ascov ascov moreover following steps section improved pem estimate asymptotic distribution thereby proving asymptotic efficiency remark note asymptotic covariances shown satisfy following properties ascov ascov ascov ascov claims follow applying properties chapter context properties imply sufficiently large proposed estimator decrease covariance estimated parameters compared standard pem asymptotic covariances satisfy pythagorean relation pem estimate projected orthogonally proper subspace lies next establish imposing larger relative degree improves accuracy estimates provided highest relative degree model structure contains real system theorem given plant order relative degree consider candidate models relative degree improved pem parameter vector estimates respectively asymptotic covariance matrices satisfy ascov ascov proof proof follows applying theorem details omitted due length restrictions simplicity assume vector appropriate dimension zeros considered first terms necessary remark relative degree system always known practical applications physical knowledge system give intuition addition statistical measures used coefficient determination young information criterion differences correct relative degree imposed estimator used command matlab cases srivc implemented contsid toolbox matlab chapter default initialisation set estimate model onte arlo simulation studies effect number data points sampling rate study designed input pseudorandom binary sequence prbs amplitude switches first input sequence number stages shift register data length shortest interval hence sequence data points obtained noise white gaussian noise signal variance set ratio snr noiseless output sequence noise equals three monte carlo studies performed simulations different noise realisations one results shown table study performance proposed estimator series experiments system considered system linear phase system complex poles tested many publications see system identification system interesting since two damped oscillatory modes damping respectively phase zero 
known particularly difficult system estimate since methods may converge local minimum well initialised three methods compared pem pem relative degree enforcement labeled pemrd simplified refined method systems srivc one successful direct methods available suggested general use one recent surveys system identification method tested different monte carlo simulations evaluated according average normalized square error system estimate mse table onte arlo simulation results prbs input total length method pem pemrd srivc pem pemrd srivc pem pemrd srivc mse mse fit average normalized square error parameter vectors mse fit measure fit output sequence simulated data without additive measurement noise simulated output sequence estimated model average value run pem using standard matlab system identification toolbox command assumed correct order system known search algorithm initialised estimate given null space fitting based pemrd pem estimate previously obtained required jacobian matrix numerically calculated via finite pem initialised estimate srivc also tested similar results order analyse performance estimator less data set number stages data length shortest interval resulting input data points snr test results monte carlo simulations sampling period found table table onte arlo simulation results prbs input total length method pem pemrd srivc pem pemrd srivc pem pemrd srivc mse mse fit tables show refined pem estimator statistically improves estimates given pem different input signal also tested taken data measurements multisine input given sum sine waves angular frequencies standard deviation additive noise set median monte carlo simulations normalized model error normalized parameter error fit obtained results shown table iii bode diagram table iii bode diagrams magnitude multisine input simulations setup section clear fig improvement pem mainly high frequencies intuitive since relative degree determines asymptotic slope bode diagram magnitude proposed method enforces true asymptotic slope leading important gain accuracy magnitude phase phase deg competitive method srivc even high frequency sampling note less data points pemrd still outperforms pem every sampling period indicates asymptotic properties studied section observed practical finite data cases well remark tables discarded cases pem delivered estimates one pole negative real axis fortunately scenario uncommon cases seen simulations similar phenomenon observed table srivc estimates gave negative fit values simulations considered either onte arlo simulation results multisine wave input total length method pem pemrd srivc pem pemrd srivc frequency fit input sampling method srivc performs poorly pem pemrd normally reach global optimum even though fit near optimal pemrd consistently better standard pem metrics mean covariance estimated parameters established theorem improved pem estimate reduce least asymptotically covariance parameter vector test obtained mean standard deviation parameter given monte carlo study setup section results shown table observed mean values similar methods except standard pem estimate correct model structure pemrd provides lowest standard deviation every parameter direct comparison standard pem subsection analyse improvements novel method standard pem estimates show impact selecting enforcing correct relative degree focused bode diagrams resulting models pem pemrd figure plotted frequency response estimates monte carlo fig bode diagram estimates pem green estimates pemrd blue real system red also direct 
comparison plots pem pemrd shown figure plots compare normalized model parameter error fit pem pemrd monte carlo experiment total pemrd outperforms standard pem monte carlo simulations specially fit comparison experiments lead increase fit pem refinement random systems obtain average performance estimator tested data set created random systems order relative degree using rss command matlab slowest pole set real part less input unit variance gaussian white noise additive white noise also gaussian standard deviation equal maximum value noiseless output sampling period chosen times faster fastest pole zero real system computed median metrics used section random systems also counted failures estimators cases estimates produced negative fit algorithm crashed possible correctly initialise pem estimate tried reducing factor sampling rate initialisation estimation results seen table although estimators report failures pemrd shows promising results table stimated parameter value means standard deviations method considering method pem pemrd srivc model error norms pemrd random systems put forward promising results shown refinement standard pem leads important improvement statistical metrics studied performance comparable better srivc sampling periods study provided pem initialised correctly parameter error norms pemrd parameter true value mean std dev mean std dev mean std dev eferences pem fit pem pemrd pem fig direct comparison plots pem pemrd monte carlo simulations green dots correspond monte carlo simulations pemrd outperforms pem red dots represent opposite dashed blue line separatrix table onte arlo simulation results random systems method pem pemrd srivc fit failures metrics pointed seen tests initialisation aspects fact major issue concerning reliability algorithms onclusions proposed refinement standard pem estimator indirect system identification achieves asymptotically optimal desired relative degree enforcement explicit expression estimator found statistical properties analysed extensive simulations using standard benchmarks stoica system identification ljung system identification theory user edition pintelon schoukens system identification frequency domain approach john wiley sons bohlin practical process identification theory applications springer rao unbehauen identification systems iee theory applications garnier wang identification models sampled data springer garnier direct approaches system identification overview benefits practical applications european journal control rao garnier numerical illustrations relevance direct model identification ifac proceedings volumes ljung experiments identification continuous time models ifac proceedings volumes stoica friedlander indirect prediction error method system identification automatica wittenmark computer controlled systems theory design ljung system identification toolbox getting started guide unbehauen rao approaches system identification survey automatica sinha identification systems samples data introduction sadhana horn johnson matrix analysis edition cambridge university press kollar franklin pintelon equivalence models system identification instrumentation measurement technology conference vol ieee zehna invariance maximum likelihood estimators annals mathematical statistics gourieroux monfort statistics econometric models vol cambridge university press valenzuela rojas rojas optimal enforcement causality transfer function estimation ieee control systems letters young recursive estimation forecasting adaptive control control 
dynamic systems part ljung initialisation aspects subspace identification methods european control conference ecc welsh alamir system identification using indirect inference ifac proceedings volumes galrinho rojas hjalmarsson weighted fitting method arxiv preprint
3
data mining meets optimization case study quadratic assignment problem oct yangming zhou duval paper presents hybrid approach called frequent pattern based search combines data mining optimization proposed method uses data mining procedure mine frequent patterns set solutions collected previous search mined frequent patterns employed build starting solutions improved optimization procedure presenting general approach composing ingredients illustrate application solve challenging quadratic assignment problem computational results hardest benchmark instances show proposed approach competes favorably algorithms terms solution quality computing time index pattern mining heuristic design optimization quadratic assignment ntroduction ecent years hybridization data mining techniques metaheuristics received increasing attention optimization community number attempts made use data mining techniques enhance performance metaheuristic optimization solving hard combinatorial problems data mining involves discovering useful rules hidden patterns data help data mining techniques heuristic search algorithms hopefully make search strategies informed thus improve search performances instance candidate solutions visited search algorithm collected exploited supervised unsupervised learning techniques extract useful hidden patterns rules used guide future search decisions search algorithm generally data mining techniques could help build starting solution initial population choose right operators set suitable parameters make automatic algorithm configuration work introduce hybrid approach called frequent pattern based search fpbs combines data mining techniques optimization methods solving combinatorial search problems basically fpbs employs data mining procedure mine useful patterns frequently occur elite solutions collected previous search uses mined patterns construct new starting solutions improved optimization method update set elite solutions used pattern mining fpbs calls dedicated procedure manage newly discovered elite solutions key intuition behind proposed approach frequent pattern extracted solutions fies particularly promising region search space worthy intensive examination optimization procedure using multiple mined patterns combined effective optimization procedure fpbs expected achieve suitable balance search exploration exploitation critical high performance search algorithm verify interest proposed fpbs approach consider highly challenging quadratic assignment problem qap case study besides popularity one studied combinatorial optimization problem qap also relevant representative many permutation problems apply general fpbs approach solve qap specify underlying patterns frequent patterns frequent pattern mining algorithm dedicated optimization procedure assess resulting algorithm set hardest qap benchmark instances qaplib experimental results show proposed algorithm highly competitive compared algorithms terms solution quality computing time rest paper organized follows next section introduce concept frequent pattern mining provide literature review hybridization association rule mining metaheuristic optimization section present proposed frequent pattern based search approach section shows application general fpbs approach solve quadratic assignment problem section dedicated experimental analysis key components proposed approach finally conclusions work discussed section zhou hao corresponding author duval department computer science leria angers boulevard lavoisier angers france hao also 
affiliated institut universitaire france rue descartes paris france requent pattern mining heuristic search state art section briefly recall concept frequent pattern mining review related literature combination search methods association rule mining frequent pattern mining techniques solving combinatorial optimization problems frequent pattern mining frequent pattern mining originally introduced market basket analysis form association rules mining purpose identifying associations different items customers place shopping baskets also concept frequent itemsets first introduced mining transaction database let transaction database defined set items frequent itemset typically refers set items often appear together transactional dataset milk bread frequently bought together grocery stores many customers furthermore consists items frequent occurs transaction database lower times minimum support threshold original model frequent pattern mining problem finding association rules also proposed association rules closely related frequent patterns sense association rules considered output derived mined frequent patterns given two itemsets rule considered association rule minimum support minimum confidence following two conditions satisfied simultaneously set frequent pattern minimum support ratio support least association rule frequent pattern mining heuristic search review existing studies hybridization data mining heuristic search starting issue representation frequent patterns representation frequent patterns mined knowledge information represented form frequent patterns association rules frequent patterns patterns occur frequently given data various types patterns itemsets subsequences substructures apply frequent pattern mining solving combinatorial optimization problems one main challenges define suitable pattern problem consideration following introduce definition frequent patterns terms frequent items two categories representative optimization problems frequent pattern subset selection problems subset selection basically determine subset specific elements among set given elements optimizing objective function defined selected elements unselected elements subset selection encountered many situations includes instance knapsack problems clique problems diversity problems maximum stable problems critical node problems candidate solution subset selection problem usually represented set selected elements since natural treat element item frequent pattern mining techniques directly applied subset selection problems setting transaction corresponds solution subset selection problem pattern conveniently defined subset elements frequently appear specific solutions thus given database set visited solutions frequent pattern mined support corresponds set elements occur least times database frequent pattern permutation problems permutation problems cover another large range important combinatorial problems many classic problems traveling salesman problem quadratic assignment problem graph labeling problems flow shop scheduling problems typical permutation examples even solutions category problems represented permutations practical meaning permutation depends problem consideration apply frequent pattern mining techniques problem transformation permutation itemset necessary instance context vehicle routing problem transformation proposed map permutation sequence visited cities set pairs two consecutive elements cities permutation transformation emphasizes order two consecutive elements section introduce new 
transformation qap whose main idea decompose permutation set pairs focuses order elements also relation element location settings frequent pattern considered set pairs frequently appear specific solutions mining heuristics section present brief literature review hybridizing association rule mining frequent pattern mining techniques metaheuristics reviewed studies summarized table grasp greedy randomized adaptive search procedure first metaheuristic hybridized data mining techniques association rule mining denoted approach originally designed solve set packing problem organizes search process two sequential phases incorporates association rule mining procedure second phase specifically solutions found first phase grasp stored elite set data mining procedure invoked extract patterns elite set second phase new solution constructed based mined pattern instead using greedy randomized construction procedure grasp approach applied solve several problems including maximum diversity problem server replication reliable multicast problem problem survey significant applications table main work hybridizing association rules mining techniques metaheuristics solving combinatorial optimization problems algorithm optimization problem gadmls hdmns vals gaar gaar grasp grasp grasp grasp grasp ils set packing problem maximum diversity problem server replication reliable multicast problem routing problem problem problem network design problem server replication reliable multicast problem problem set partitioning problem constrain satisfaction problem weighted constrain satisfaction problem traveling salesman problem set covering pairs problem year greedy randomized adaptive search procedure grasp genetic algorithm local search neighborhood search variable neighborhood descent vnd iterated local search ils found interesting extension execute data mining procedure multiple times instead compared data mining call occurs midway whole search process version performs mining task elite set stagnates idea explored recently hybridizing data mining grasp enhanced variable neighborhood descent vnd addition grasp data mining also hybridized metaheuristics like evolutionary algorithms improve performance evolutionary algorithm applied oil collecting vehicle routing problem hybrid algorithm gadmls combining genetic algorithm local search data mining proposed another hybrid approach gaar uses data mining module guide evolutionary algorithm presented solve constraint satisfaction problem csp besides standard components genetic algorithm data mining module added find association rules variables values archive best individuals found previous generations apart grasp evolutionary algorithms shown heuristics also benefit incorporation data mining procedure example data mining approach applied extract variable associations previously solved instances identifying promising pairs flipping variables large neighborhood search method set partitioning problem thus reducing search space local search algorithms set partitioning problem another example hybridization neighborhood search data mining techniques solving problem data mining procedure also integrated multistart hybrid heuristic problem combines elements different traditional metaheuristics local search uses memorybased intensification mechanism finally iterated local search method recently hybridized data mining solving set covering pairs problem requent pattern based search section propose frequent pattern based search fpbs solving combinatorial search problems search approach 
tightly integrates frequent pattern mining optimization procedure first show general scheme proposed fpbs approach present composing ingredients general scheme fpbs approach uses relevant frequent patterns extracted solutions build promising starting solutions improved optimization procedure improved solutions turn used help mine additional patterns iterating mining procedure optimization procedure fpbs expected examine search space effectively efficiently case study illustrate section application solve quadratic assignment problem perspective system architecture fpbs maintains archive solutions called elite set purpose pattern mining includes five critical operating components initialization procedure section optimization search procedure section data mining procedure section frequent pattern based solution construction procedure section elite solution management procedure section general framework proposed fpbs approach presented algorithm fpbs starts set solutions obtained initialization procedure line solutions data mining procedure invoked mine number frequent patterns line new solution constructed based mined pattern improved optimization procedure lines improved solution finally inserted elite set according elite solution management policy line process repeated stopping condition time limit satisfied addition invocation initialization procedure data mining procedure also called time elite set judged stagnating lines updated predefined number iterations algorithm general framework frequent pattern based search input problem instance elite set size number patterns mined output best solution found begin supposed minimization problem elitesetinitialize set elite solution arg min best solution found far requentp atternm ine stopping condition reached atternselection construct new solution based mined frequent pattern atternbasedconstruct improve constructed solution optimize update best solution found far update elite set elitesetu pdate restart mining procedure elite set stagnates elite set stagnates requentp atternm ine elite set initialization fpbs starts search elite set composed distinct solutions build elite set first generate means random greedy construction method initial solution improved optimization procedure see section improved solution inserted elite set according elite set management strategy see section repeat process elite set different solutions built note similar ideas successfully applied build initial population memetic algorithms frequent pattern mining procedure approach frequent pattern mining procedure used discover patterns frequently occurs solutions stored elite set beside itemsets see section mined patterns also subsequences substructures subsequence order items buying mobile phone first power bank finally memory card subsequence occurs frequently transaction database frequent sequential pattern substructure refer different structural forms subgraphs subtrees sublattices may combined itemsets subsequences substructure occurs frequently graph database called frequent structural pattern handle wide diversity data types numerous mining tasks algorithms proposed literature simple summary frequent pattern mining algorithms provided table specific application patterns solutions collected elite set expressed itemsets subsequences substructures form patterns determined suitable mining algorithm selected accordingly case study qap presented section pattern corresponds set pairs thus frequent patterns conveniently represented frequent itemsets set identical assignments 
consequently adopt fpmax algorithm see section details mine maximal frequent items detailed reviews frequent pattern mining algorithms found optimization procedure purpose solution improvement built initial elite set improve new solution built mined pattern optimization procedure dedicated given problem applied principle hand since optimization component ensures key role search intensification desirable call powerful search algorithm basically optimization procedure considered black box optimizer called improve input solution practice optimization procedure based local search search even hybrid memetic search case search procedure must carefully designed respect problem consideration ideally effective terms search capacity time efficiency show section quadratic assignment problem considered work adopt powerful breakout local search procedure optimization procedure solution construction based mined pattern set frequent patterns extracted elite set new solutions constructed based mined patterns purpose first select mined frequent pattern tournament selection strategy follows let size tournament pool randomly choose individuals replacement mined pattern set pick best one largest size parameter computational complexity selection strategy advantage tournament selection strategy selection pressure easily adjusted changing size tournament pool larger tournament pool smaller chance shorter patterns selected since frequent patterns usually correspond set common elements shared solutions examined mining procedure mined pattern directly defines partial solution obtain whole solution apply greedy random procedure complete partial solution way build solution shares similarity general crossover procedure however compared notion backbones typically shared two parent solutions frequent patterns naturally shared two solutions sense backbones considered special case general frequent patterns table simple summary frequent pattern mining algorithms algorithms tasks patterns frequent itemset mining sequential pattern mining structural pattern mining itemsets subsequences substructures apriori gsp spade agm fsg fpmax prefixspan gspan ffsm finally possible construct new solution mined pattern instead using long pattern selected tournament selection strategy explained also elite solution selected guide construction new solution particular way use frequent patterns determined according studied problem elite set management explained new solution constructed using mined frequent pattern improved optimization procedure decide whether improved solution inserted elite set number updating strategies literature applied within fpbs approach example classic replacement strategy simply inserts solution replace worst solution better worst solution addition elaborated updating strategies consider criteria quality solutions example updating strategy considers quality solution also distance solutions population suitable elite set management strategy determined according practical problem flow distance matrices respectively solution represents location chosen facility optimization objective find permutation sum products flow distance matrices minimized addition facility location application qap used formulate many problems electrical circuit transportation engineering parallel distributed computing image processing analysis chemical reactions organic compounds moreover number classic nphard problems traveling salesman problem maximum clique problem bin packing problem graph partitioning problem also recast qaps due practical 
theoretical significance qap attracted much research effort since first formulation fact qap among studied competitive combinatorial optimization problems since exact algorithms unpractical instances size larger large number heuristic methods proposed qap provide solutions reasonable computation times detailed reviews heuristic metaheuristic algorithms developed till qap available updated review recent studies qap found fpbs applied quadratic assign fpbs qap ment problem section present case study applying general fpbs approach quadratic assignment problem qap show competitiveness compared qap algorithms quadratic assignment problem quadratic assignment problem nphard combinatorial optimization problem originally introduced koopmans beckman model locations indivisible economic activities capital equipment qap aims determine minimal cost assignment facilities locations given flow aij facility facility distance buv locations let denotes set possible permutations qap mathematically formulated follows min aij algorithm shows fpbs algorithm qap denoted instantiation general scheme algorithm since inherits main components fpbs hereafter present specific features related qap solution representation evaluation optimization procedure frequent pattern mining qap solution construction using qap patterns elite set update strategy solution representation neighborhood evaluation given qap instance facilities locations candidate solution naturally represented permutation location assigned facility search space thus composed possible permutations solution quality given examine search space adopt iterated local search algorithm called bls see section purpose introduce neighborhood used bls give solution permutation neighborhood defined set possible permutations algorithm fpbs algorithm qap input instance elite set size number mined patterns time limit tmax maximum number iterations without updating max update output best solution found far begin elitesetinitialize arg min requentp atternm ine update tmax atternselection build new solution based selected pattern atternbasedconstruct improve constructed solution breakoutlocalsearch update best solution found far update elite set elitesetu pdate rue update else update update restart mining procedure elite set steady update max update requentp atternm ine update obtained exchanging values two different positions size indicated given permutation objective value objective value neighnoring permutation effectively calculated according incremental evaluation technique breakout local search ensure effective examination search space adopt breakout local search bls algorithm algorithms qap currently available literature bls follows iterated local search scheme iteratively alternates descent search phase find local optima dedicated perturbation phase discover new promising regions bls starts initial random permutation improves initial solution local optimum best improvement descent search neighborhood upon discovery local optimum bls triggers perturbation mechanism perturbation mechanism adaptively selects perturbation called directed perturbation random perturbation called undirected perturbation bls also determines number perturbation steps called perturbation strength adaptive way perturbation random perturbation provide two complementary means search tion former applies selection rule favors neighboring solutions minimize objective degradation constraint neighboring solutions visited last iterations tabu tenure latter performs moves selected uniformly random keep 
suitable balance intensified search diversified search bls alternates probabilistically two perturbations probability select particular perturbation determined dynamically according current number times visit local optima without improvement best solution found probability applying perturbation random perturbation empirically limited least perturbation strength determined based simple reactive strategy one increases one search returns immediate previous local optimum otherwise resets given initial value type strength perturbation determined selected perturbation strength applied current solution resulting solution used starting solution next round descent search procedure see details mining frequent patterns qap quadratic assignment problem typical permutation problem whose solutions naturally represented permutations qap define frequent pattern set identical assignments shared highquality solutions represent frequent pattern itemset apply frequent itemset mining algorithm need transform permutation set items transformation recently proposed transformation works follows pair elements given permutation arc generated maps permutation set arcs example considering permutation represented sequence elements mapped transformation permutation divided set arcs conserves order elements however transformation loses information elements locations practice identify true location element part pairs available overcome difficulty propose new transformation transformation decomposes permutation set ordered pairs pair formed element facility position location specifically let candidate solution qap element facility corresponding pair generated transforms permutation set elementposition pairs example given corresponding set pairs permutation transformed set pairs treat pair item length permutation task mining frequent patterns multiple permutations conveniently transformed task mining frequent itemsets main drawback mining frequent itemsets large frequent itemset almost subsets itemset might examined however usually sufficient find maximal frequent itemsets maximal frequent itemset superset frequent thus mining frequent itemsets reduced mine maximal frequent itemsets purpose adopt popular fpmax algorithm algorithm pseudo code fpmax algorithm input output updated begin contains single path insert else set ull frequent items solution construction based mined pattern algorithm describes main steps construct new solutions based mined frequent patterns initially pattern first selected set mined frequent patterns according tournament selection strategy line chosen pattern partial solution line length partial solution less given threshold use elite solution guide construction lines specifically first select elite solution called guiding solution elite set line complete based guiding solution directly copy elements unassigned positions elements assigned finally still incomplete solution randomly assign remaining elements unassigned positions full solution obtained line algorithm solution construction based mined frequent patterns else frequent items conditional pattern base sort ail decreasing order items counts subset checking ail alse construct conditional array initialize conditional call fpmax merge algorithm shows pseudo code fpmax algorithm description basic concepts used fpmax frequent pattern tree method maximal frequent itemset tree found provide brief presentation general procedure initial call constructed first scan database together initial empty recursion one single path single path together mfi dataset 
mfi inserted line otherwise item header table set prepare recursive call fpmax items header table processed increasing order frequency maximal frequent itemsets found frequent subsets line lines use array technique line invokes subset checking function check together frequent items conditional pattern base subset existing mfi thus perform superset pruning function subset checking returns alse fpmax called recursively lines detailed description fpmax algorithm please refer input instance set mined patterns elite set size output new solution begin atternselection select mined pattern generate partial solution based selected pattern guidedsolutionselect select guiding solution guidedcomplete complete based guiding solution randomcomplete complete random elite set update strategy new improved solution obtained bls algorithm decide whether inserted elite set case adopt following qualitybased strategy inserted two conditions satisfied simultaneously different solution worse solution arg worst solution computational studies fpbs qap computational studies aim evaluate efficiency algorithm purpose first perform detailed performance comparisons two algorithms bls bma whose source codes available furthermore compare four additional algorithms published recently since benchmark instances source code fpmax algorithm publicly available http experimental evaluations qap algorithms usually performed popular benchmark instances ranging instances classified four categories type instances obtained practical qap applications type unstructured randomly generated instances whose distance flow matrices randomly generated based uniform distribution type iii instances generated instances similar qap instances type instances distances distances manhattan distance points grid like consider instances type easy modern qap algorithms optimal solutions found easily short time often less one second experiments focus remaining hard instances type type iii type notice challenging instances single algorithm attain results instances indeed even best performing algorithm misses least two results experimental settings proposed implemented programming language complied gcc flag experiments carried computer equipped intel processor ghz ram operating linux system without using compiler flag running wellknown dimacs machine benchmark procedure machine requires seconds solve benchmark graphs respectively computational results obtained running algorithm parameter settings provided table identify appropriate value given parameter compare performance algorithm different parameter values fixing parameter values example select appropriate frequent pattern set size value provided section table parameter settings algorithm parameter description value tmax max update max iter time limit hours elite set size number times without updating minimum support frequent pattern set size tournament pool size length threshold number iterations parameters bls adopt default values provided given stochastic nature proposed algorithm independently ran times test instance standard practice solving qap assessment based percentage https source code algorithm made available http dfmax ftp deviation metrics widely used literature metrics measures percentage deviation value bkv example best percentage deviation bad average percentage deviation apd worst percentage deviation wpd respectively calculated according bkv bkv corresponds best objective value average objective value worst objective value achieved algorithm smaller xpd value better evaluated algorithm 
comparison bls bma demonstrate effectiveness algorithm first show detailed comparison two main reference algorithms bls breakout local search bma memetic algorithm experiment based two motivations first bls bma among best performing qap algorithms literature second source codes bls bma available making possible make fair comparision using computing platform stopping conditions third bma use bls underlying optimization procedure comparison allows assess added value data mining component experiment run two reference algorithms two stopping conditions limit tmax minutes hour limit tmax minutes hours allows study behavior compared algorithms short long conditions comparative performances bls bma tmax minutes tmax minutes presented table table respectively two tables report bpd apd wpd values algorithm average time minutes achieve results last row table also indicate average value indicator smaller value better performance algorithm table observe achieves best performance compared algorithms bls bma tmax minutes first achieves values except two cases bls bma fail find values five instances second able reach values two largest instances within given computing time bls fails find values within time limit minutes find values long time limit tmax hours bma performs better bls attaining value still fails bpd value benchmark instances smaller bls bma respectively similar observations also found apd wpd indicators worth noting needs less time achieve better results bls consumes nearly computing time bma table performance comparison proposed algorithm bls bma hard instances tmax minutes number times reaching value runs indicated parentheses bls bma instance bkv bpd apd wpd avg apd wpd long time limit tmax minutes allowed algorithm able achieve even better results see table values obtained higher success rate compared results tmax minutes table average bpd value smallest compared bma bls fpbsqap also achieves smallest average apd value average wpd value computing times requires average minutes reach best solution shortest time among compared algorithms minutes bls minutes bma summary algorithm competes favorably two qap algorithms bls bma terms solution quality computing time computational results demonstrate effectiveness proposed algorithm shows usefulness using frequent patterns mined solutions guide search effective exploration solution space comparison four algorithms extend experimental study comparing fpbsqap four recent qap algorithms literature bpd parallel hybrid algorithm pha used mpi libraries implemented highperformance cluster nodes total ram total disk capacity configured raid node includes cpus cores per cpu ram memory powered great deluge algorithm tmsgd implemented personal computer ghz ram algorithm stopped number fitness evaluations reaches instance size bpd apd wpd parallel algorithm msh implemented high performance cluster pha algorithm breakout local search using openmp implemented openmp api parallel computations runs computers executed personal computer intel core cpu ghz cores ram possible execute logical processors computer one notices three four recent qap algorithms implemented run parallel machines moreover results obtained different computing platforms different stopping conditions comparison shown section provided mainly indicative purposes still comparison provides interesting indications performance proposed algorithm relative algorithms moreover availability code makes possible researchers make fair comparisons see footnote table presents comparative results proposed 
algorithm four reference algorithms like adopt apd indicator defined section comparative study indicate running time additional indicator interpreted cautions reasons raised completeness also include results bls bma table last row table indicate average value indicator table observe algorithm achieves highly competitive performance compared algorithms average apd value slightly worse parallel pha algorithm better remaining reference algorithms moreover even pha table performance comparison proposed algorithm bls bma hard instances tmax minutes number times reaching value runs indicated parentheses bls bma instance bkv bpd apd wpd avg bpd apd wpd bpd apd wpd table comparative performance algorithm algorithms hard instances terms apd value computational time given minutes indicative purposes tmsgd instance bkv apd apd apd apd apd apd apd avg results bls bma obtained running programs computer tmax minutes results slightly different results reported pha msh parallel algorithms run platforms various stopping conditions run parallel computing platform average computing time minutes almost three times time required minutes obtain similar results note results three instances including two hardest largest instances reported apd value computed remaining instances importantly algorithm requires least time achieve best results average time minutes observations show algorithm highly competitive compared algorithms terms solution quality computing time nalysis discussion section perform additional experiments gain understandings proposed fpbs algorithm including rationale behind solution construction based frequent patterns effectiveness solution construction based mined frequent patterns impact number largest patterns performance proposed algorithm rationale behind solution construction based mined patterns explain rationale behind solution construction based mined frequent patterns analyze structural similarity solutions elite set length distribution frequent patterns mined elite set given two solutions define similarity follows effectiveness solution construction based frequent pattern frequent pattern based solution construction method good alternative general crossover operator length pattern proportion number identical elements total number elements larger pattern length indicates thus shared elements solution similarity defined def considered special case pattern length defined def specifically support value mined pattern equals pattern simplified set common elements shared two solutions confirmed according results reported figure curve maximum solution similarity left exactly curve maximum length right experiment solved benchmark instance time limit tmax minutes analyze solution similarity solutions stored elite set according calculate length distribution set frequent patterns mined elite set according results similarity solutions length distribution mined frequent patterns presented figure figure report maximum value average value minimum value solution similarity pattern length respectively clearly observe high similarity solutions specifically instances maximum solution similarity larger also average solution similarities two solutions larger except average solution similarity significant observation derived based lengths mined patterns showed right high structural similarities solutions provide rationale behind solution construction based mined patterns len pattern length sim set common elements shared larger similarity two solutions common elements share mentioned mined frequent pattern 
represents set identical elements shared two solutions given minimum support frequent pattern directly converted partial solution thus define length pattern follows solution similarity fig solution similarity solutions left length distribution mined patterns right evolutionary algorithms memetic algorithms demonstrate effectiveness solution construction using frequent patterns compare approach general crossover operator within framework algorithm experiment compared alternative version frequent pattern based solution construction fpbsqap replaced standard uniform crossover operator used ran algorithms benchmark instance times time limit tmax minutes comparative results summarized table table indicates performs better able achieve better bpd value instances except bpd value marginally worse achieved average bpd value also better finally achieves better results terms average apd value average wpd value achieve results average run time slightly shorter minutes explained fact needs execute frequent pattern mining procedure search observations confirms interest solution construction method using mined frequent patterns impact number largest frequent patterns number longest frequent patterns influences diversity new solutions constructed solution construction method using mined frequent patterns investigate impact parameter varied values within reasonable range compared performances box whisker plots showed figure obtained considering ten different values experiments conducted four representative instances selected different families value table comparisons hard instances time limit tmax minutes success rate reaching value runs indicated parentheses bkv bpd apd wpd avg best known value best known value number patterns number patterns best known value best known value number patterns number patterns fig impact number largest frequent patterns box whisker plots corresponding different values terms percentage deviation value figure indicates different values number largest frequent patterns shows performance percentage deviation bestknown value observe performance fpbsqap algorithm strongly depends value except achieves good performance number largest pattern fixed justifies default value shown table tune parameters max update used method chosen max update default values possible improves performance parameters tuned specific problem instance bpd apd wpd also considered alternative version bma removing mutation procedure instance ran algorithm times stopping condition tmax minutes instance onclusions work paper proposed optimization approach called frequent pattern based search fpbs proposed approach relies data mining procedure mine frequent patterns solutions collected search mined patterns used create new starting solutions improvements iterating pattern mining phase optimization phase fpbs designed ensure effective exploration combinatorial search space viability proposed approach verified quadratic assignment problem extensive computational results popular qaplib benchmarks showed fpbs performs remarkably well compared recent algorithms terms solution quality computational efficiency specifically approach able find objective values benchmark instances except within time limit hour hours best knowledge qap algorithms achieve performance furthermore performed additional experiments investigate three key issues proposed fpbs algorithm future work three directions followed first study focused exploring maximal frequent itemsets however interesting patterns available area pattern mining worth 
studying alternative patterns like sequential patterns graph patterns second fpbs approach would interesting investigate interest solve optimization problems particularly permutation problems linear ordering problem traveling salesman problem subset selection problems diversity dispersion problems critical node problems eferences acan great deluge tabu search hybrid memory support quadratic assignment problem applied soft computing aggarwal bhuiyan hasan frequent pattern mining algorithms survey frequent pattern mining pages springer agrawal swami mining association rules sets items large databases proceedings acm sigmod international conference management data pages new york usa aksan dokeroglu cosar cooperative parallel breakout local search algorithm quadratic assignment problem computers industrial engineering anstreicher brixius goux linderoth solving large quadratic assignment problems computational grids mathematical programming asta selection heuristic search information sciences barbalho rosseti martins plastino hybrid data mining grasp computers operations research benlic hao multilevel memetic approach improving graph ieee transactions evolutionary computation benlic hao breakout local search quadratic assignment problem applied mathematics computation benlic hao memetic search quadratic assignment problem expert systems applications dokeroglu cosar novel multistart algorithm grid quadratic assignment problem engineering applications artificial intelligence drezner hahn taillard recent advances quadratic assignment problem special emphasis instances difficult metaheuristic methods annals operations research duman quadratic assignment problem context printed circuit board assembly process computers operations research grahne zhu efficiently using mining frequent itemsets proceedings icdm workshop frequent itemset mining implementations december grahne zhu fast algorithms frequent itemset mining using ieee transactions knowledge data engineering guerine rosseti plastino extending hybridization metaheuristics data mining dealing sequences intelligent data analysis han cheng xin yan frequent pattern mining current status future directions data mining knowledge discovery han pei kamber data mining concepts techniques elsevier hao memetic algorithms discrete optimization neri cotta moscato eds handbook memetic algorithms studies computational intelligence pages james rego glover multistart tabu search diversification strategies quadratic assignment problem ieee transactions systems man cybernetics part systems humans may jourdan dhaenens talbi using datamining techniques help metaheuristics short survey pages springer berlin heidelberg berlin heidelberg koopmans beckmann assignment problems location economic activities econometrica journal econometric society pages loiola abreu hahn querido survey quadratic assignment problem european journal operational research hao memetic algorithm graph coloring european journal operational research martins vianna rosseti martins plastino making heuristic faster data mining annals operations research pages martins rosseti plastino data mining stochastic local search pages springer plastino barbalho santos fuchshuber martins adaptive versions dmgrasp hybrid metaheuristic journal heuristics plastino fuchshuber martins freitas salhi hybrid data mining metaheuristic problem statistical analysis data mining porumbel hao kuntz evolutionary approach diversity guarantee grouping recombination graph coloring computers operations research raschip croitoru 
stoffel guiding evolutionary search association rules solving weighted csps proceedings annual conference genetic evolutionary computation pages acm raschip croitoru stoffel using association rules guide evolutionary search solving constraint satisfaction proceedings ieee congress evolutionary computation cec pages ieee reddy govardhan sarma hybridization neighbourhood search metaheuristic data mining technique solve problem international journal computational engineering research ribeiro trindade plastino martins hybridization grasp metaheuristics data mining techniques proceedings first international workshop hybrid metaheuristics valencia spain august pages ribeiro plastino martins hybridization grasp metaheuristic data mining techniques journal mathematical modelling algorithms samorani laguna neighborhood search informs journal computing santos ochi marinho drummond combining evolutionary algorithm data mining solve routing problem neurocomputing santos ribeiro plastino martins hybrid grasp data mining maximum diversity problem pages springer berlin heidelberg santos milagres albuquerque martins plastino hybrid grasp data mining efficient server replication reliable multicast proceedings global telecommunications conference sans francisco usa pages ieee santos martins plastino applications dmgrasp heuristic survey international transactions operational research automatic offline configuration algorithms proceedings companion publication annual conference genetic evolutionary computation pages sevaux mid memetic algorithms population management computers operations research taillard robust taboo search quadratic assignment problem parallel computing tosun performance parallel hybrid algorithms solution quadratic assignment problem engineering applications artificial intelligence umetani exploiting variable associations configure efficient local search set partitioning problems proceedings aaai conference artificial intelligence aaai pages aaai press wauters verbeeck causmaecker berghe boosting metaheuristic search using reinforcement learning hybrid metaheuristics springer berlin heidelberg wauters verbeeck causmaecker berghe optimization approach scheduling journal scheduling springer witten frank data mining practical machine learning tools techniques second edition morgan kaufmann series data management systems morgan kaufmann publishers san francisco usa zhou hao duval reinforcement learning based local search grouping problems case study graph coloring expert systems applications zhou hao duval memetic search maximum diversity problem ieee transactions evolutionary computation
8
divergence entropy information opinionated introduction information theory phil chodrow aug mit operations research center pchodrow https august information theory mathematical theory learning deep connections topics diverse artificial intelligence statistical physics biological evolution many primers topic paint broad picture relatively little mathematical sophistication many others develop specific application areas detail contrast informal notes aim outline elements way thinking cutting rapid interesting path theory foundational concepts theorems aimed practicing systems scientists interested exploring potential connections information theory fields main mathematical prerequisite notes comfort elementary probability including sample spaces conditioning expectations take divergence foundational concept proceed develop entropy mutual information discuss main foundational results including chernoff bounds characterization divergence gibbs theorem data processing inequality recurring theme definitions information theory support natural theorems sound obvious translated english pithily information theory makes common sense since focus notes primarily technical details proofs provided relevant techniques illustrative broader themes otherwise proofs intriguing tangents referenced footnotes notes close highly nonexhaustive list references resources perspectives field contents information theory start entropy introducing divergence entropy conditional entropy information three ways information shrinks reading information theory briefly information theory mathematical theory learning rich connections physics statistics biology methods quantify complexity predictability systems make precise observing one feature system assists predicting features thinking helps structure algorithms describe natural processes draw surprising connections seemingly disparate fields formally information theory subfield probability mathematical study uncertainty randomness distinctive information theory emphasis properties probability distributions independent distributions represented quantities often claim fundamental properties system problem governing complexity learnability original formulation shannon information theory theory communication specifically transmission signal given complexity unreliable channel telephone line corrupted certain amount white noise notes emphasize slightly different role information theory theory learning generally emphasis consistent original formulation since communication problem may viewed problem message receiver learning intent sender based potentially corrupted transmission however emphasis learning allows easily glimpse rich connections information theory disciplines special consideration notes given statistical motivations concepts theoretical statistics design mathematical methods learning data considerations determine learning possible degree close connection physics connections biology cited references start entropy entropy easily concept widest popular currency many expositions theory take entropy starting point however choose different point departure notes derive entropy along way point choice divergence two distributions also called contexts relative entropy relative information free start remainder notes stick divergence though many interesting objects called divergences mathematics discussing confusion divergence well simple reason focus discrete random variables like develop theory wherever possible applies continuous random variables well divergence discrete random 
variables continuous ones two distributions satisfying certain regularity properties uniquely determined nonnegative real number whether discrete continuous contrast natural definition entropy called differential entropy continuous random variables two bad behaviors first negative undesirable measure uncertainty second arguably worse differential entropy even uniquely defined multiple ways describe continuous distribution example following three distributions gaussian distribution mean variance gaussian distribution mean standard deviation gaussian distribution mean second moment equal technically act switching one descriptions another viewed smooth change space distribution parameters example move first description second changing coordinates applying function regrettably differential entropy invariant coordinate changes change way describe distribution differential entropy changes well undesirable foundations theory independent contingencies describe distributions study divergence passes test discrete continuous cases differential entropy since define entropy terms divergence discrete case start divergence derive entropy along way introducing divergence often said divergence distributions measures surprised think state world measure however idea surprise typically explained made precise motivate divergence start somewhat unusual beginning chernoff bounds makes exact role divergence plays governing surprised ought arise technically idea smooth coordinate change captured diffeomorphisms invertible functions coordinate space whose inverses also smooth possible define alternative notions entropy attempt skirt issues however difficulties https let begin simple running example drawing infinite deck standard playing cards four card suits thirteen card values view sets possible values alphabets alphabet possible suits alphabet possible values let corresponding random variables realization suppose prior belief distribution suits deck uniform belief suits summarized vector extremely convenient view single point probability simplex valid probability distributions definition probability simplex finite alphabet probability simplex set remark helpful remember space missing dimension due constraint equilateral triangle tetrahedron belief would naturally expect drew enough cards observed distribution suits would close could draw infinitely many cards distribution would indeed converge let make precise define distribution suits observe pulling cards important remember random vector changes realization would reasonable expect indeed true almost surely probability according strong law large numbers fact true distribution cards deck happens keep drawing cards observed distribution much different belief would justifiably surprised level surprise quantified probability observing true distribution denote expect become small grows large indeed quite strong result decays exponentially special exponent definition divergence kullbackleibler divergence log using conventions log log theorem chernoff bounds suppose card suits truly distributed according probability observing thought distribution decays exponentially exponent given divergence belief true distribution another way say ignoring factors log minus log average surprise per card drawn chernoff bounds thus provide firm mathematical content idea divergence measures surprise make concrete suppose start belief deck uniform suits belief alphabet unbeknownst removed black cards deck therefore true distribution draw cards record suits surprised suit distribution 
observe divergence belief true deck distribution theorem states dominating factor probability observing empirical distribution close draws quite surprised indeed state one important properties divergence theorem gibbs inequality holds equality iff words never negative surprise unsurprised observed exactly expected proof lots ways prove gibbs inequality one lagrange multipliers fix like show problem min value value achieved unique point need two gradients gradient respect gradient implicit constraint former log log elementwise quotient vectors recall convention hand vector whose entries unity method lagrange multipliers states seek easy see solution quick check corresponding solution value completes proof remark theorem primary sense behaves like distance simplex hand unlike distance symmetric satisfy triangle let close section noting one many connections divergence classical statistics maximum likelihood estimation foundational method modern statistical practice tools linear regression neural networks may viewed likelihood maximizers divergence allows particularly elegant formulation maximum likelihood estimation likelihood maximization divergence minimization let statistical parameter may multidimensional example context normal distributions may context regression may regression coefficients let probability distribution parameters let sequence observations maximum likelihood estimation encourages find parameter argmax parameter value makes data probable express terms divergence need one piece notation let empirical distribution observations slightly involved algebraic exercise show maximum likelihood estimation problem also written argmin fact related proper distance metric usually called fisher information metric fundamental object study field information geometry amari nagaoka rather nice maximum likelihood estimation consists making parameterized distribution close possible observed data distribution sense entropy put let define shannon entropy think divergence metaphorical distance entropy measures close distribution uniform definition shannon entropy shannon entropy log convenient also use notation refer entropy random variable distributed according theorem shannon entropy related divergence according formula log size alphabet uniform distribution remark formula makes easy remember entropy uniform distribution log number possible choices playing game draw card infinite deck suit card uniform entropy suit distribution therefore log log remark words surprise thought suit distribution uniform found fact relatively unsurprised close uniform indeed gibbs inequality theorem immediately implies assumes largest value log exactly result another hint beautiful geometry divergence operation minimizing measure often called maximum likelihood estimation thus consists kind statistical projection good place note discrete random variables divergence also defined terms entropy technically bregman divergence induced shannon entropy characterized equation intuitively minus approximation loss associated estimating difference entropy using taylor expansion centered somewhat construction turns lead interesting directions statistics machine learning remark theorem provides one useful insight shannon entropy generalize naturally continuous distributions whereas equation involves uniform distribution analogous formula continuous random variables uniform distribution bayesian interpretation entropy construction entropy terms divergence fairly natural use divergence measure close uniform distribution flip sign 
high entropy distributions uniform add constant term make entropy nonnegative formulation entropy turns another interesting characterization context bayesian prediction bayesian prediction pull single card deck ask provide distribution alphabet representing prediction suit card pulled examples choose certain suite express maximal ignorance guess pull card obtaining sample reward based quality prediction relative outcome based loss function guess give dollars say assume aim encourage report true beliefs deck reward based happened could happened one appropriate loss function turns closely related entropy formally definition loss function proper alphabet random variable argmin remark definition useful think kind side information additional example could telling card pulled red card could influence predictive distribution proper incentive factor predictive distribution may feel course factor loss functions encourage example constant incentive use since guess good proper loss function guarantees maximize payout minimize loss completely accounting available data forming prediction therefore thus proper loss function ensures bayesian prediction game honest definition loss function local function remark function local iff written function prediction much probabilistic weight put event actually occurred events could happened thus proper loss funciton ensures bayesian prediction game somewhat amazingly log loss function given log loss function proper local honest fair affine transformation theorem uniqueness let local proper reward function log constants without loss generality take entropy context occurs expected know distribution suits deck know say proportions deck need formulate predictive distribution theorem implies best guess since additional side information definition entropy bayesian characterization shannon entropy minimal expected loss playing bayesian prediction game true distribution suits remark see definition consistent one saw simply compute expectation log log matches definition second inequality follows fact playing optimally true distribution best predictive distribution conditional entropy true magic probability theory conditional probabilities formalize idea learning represents best belief given know shannon entropy quite interesting information theory really starts becoming useful framework thinking probabilistically formulate conditional entropy encodes idea learning process uncertainty reduction section next need keep track multiple random variables distributions fix notation let distribution discrete random variable alphabet distribution discrete random variable alphabet joint distribution alphabet additionally denote product distribution marginals definition conditional entropy conditional entropy given log remark might seem though ought defined log looks symmetrical however quick think makes clear definition appropriate include information distribution concentrated around informative uninformative values notice values valuable others framework bayesian interpretation entropy conditional entropy expected reward guessing game assuming receive additional side information example consider playing game infinite deck cards recall suit distribution uniform entropy log suppose get side information draw card deck ask guess suit tell color black red since color two possible suits entropy decreases formally suit color easy compute log knowing color reduces uncertainty half conditional entropy somewhat difficult express terms divergence useful relationship unconditional entropy 
theorem conditional entropy related unconditional entropy entropy distribution remark theorem easy remember looks like get recalling definition conditional probability taking logs indeed take logs compute expectations prove theorem directly another way remember theorem say uncertainty given told equal uncertainty less uncertainty resolved learned theorem quick use gibbs inequality show theorem side information reduces uncertainty knowing never make uncertain less makes sense actually informative ignore theorem implies difference quantifies much reduces uncertainty example natural say carries information encode idea information uncertainty reduction next section information three ways thus far seen two concepts divergence entropy play fundamentals role information theory neither exactly resemble idea information theory earn name brief note end last section suggests think information relationship two variables knowing decreases uncertainty entropy turns idea information information falls motivation remarkably useful one formulated many interesting different ways let start naming difference definition mutual information mutual information mutual information uncertainty reduction associated knowing context bayesian guessing game value told suit color compared play game without information calculations game log log log let express mutual information two ways remarkably follow directly via simple algebra identity gives new way think meaning mutual information theorem mutual information may also written let start unpacking equation divergence actual joint distribution product distribution importantly latter distribution would independent random variables marginals plus gibb inequality implies corrolary random variables independent something like correlation coefficient measures degree statistical correlation stronger correlation coefficient two ways first detects kinds statistical relationships linear ones second correlation coefficient vanish dependent variables never happens mutual information zero mutual information implies dependence period quick illustration hard see suit color numerical value card pulled intuitively playing game offered tell card would rightly annoyed unhelpful uninformative offer suit colors equation another useful consequence since formulation symmetric corrolary mutual information symmetric finally noted briefly end previous section following direct consequence equation gibbs inequality corrolary mutual information nonnegative let unpack equation one way read quantifying danger ignoring available information surprised would ignored information instead kept using belief told deck contained red cards chose ignore continue guessing guess would surprised keep seeing red cards turn draw draw formulation expresses mutual information expected surprise dispose correlation coefficients use instead well correlation coefficients estimated data relatively simply fairly robust error contrast requires reasonably good estimates joint distribution usually available furthermore hard distinguish statistical tests significance would solve problem much complex correlation coefficients would experience ignoring available side information expectation taken possible values side information could assume formulation may seem much opaque turns remarkably useful thinking geometrically expresses mutual information average distance marginal conditionals pursuing thought turns express mutual information something like moment inertia joint distribution information shrinks famous law thermodynamics 
states closed system entropy increases physicists concept entropy closely related slightly different information theorist concept therefore make direct attack law notes however close analog law gives much flavor formulated information theoretic terms whereas law states entropy grows data processing inequality states information shrinks theorem data processing inequality let random variables let function general possible form data processing inequality right flavor meaning theorem obvious striking intuitively using predict processing reduce predictive power data processing enable tractable computations reduce impact noise observations improve visualizations one thing create information thin air amount processing substitute enough data really want pursue proof data processing inequality steps quite enlightening first need conditional mutual information definition conditional mutual information conditional mutual information given first thought seeing function really spent half hour fruitlessly attempting produce divergence summand naturally written case form expectation mutual informations conditioned specific values conditional mutual information naturally understood value knowing prediction given also already know somewhat surprisingly cases may hold knowing either increase decrease value knowing context predicting theorem chain rule mutual information remark notation refers regular mutual information random variable regard single random variable alphabet proof compute directly dividing sums remembering relations like omitting tedious algebra log log shown always chain rule nice interpretation think estimating first learning end process know therefore information total information splits two pieces information gained learned information gained learned already knowing ready prove data processing proof since function alone given proof borrowed http fact often taken hypothesis data processing inequality rather somewhat weaker sufficient prove result hand using chain rule two ways since argument obtain since nonnegative gibbs inequality conclude shown data processing inequality states absence additional information sources processing leaves less information started law thermodynamics states absence additional energy sources system dynamics leave less order started formulations suggest natural parallel concepts information order therefore natural parallel two theorems close note extremely way think let random variables reflecting possible locations momenta two particles time assume particles interact experimenter placed two particles close similar momenta thus initial configuration system highly ordered reflected knew also significantly reduce uncertainty system evolve time assuming interactions particles evolve separately according dynamics write using data processing inequality twice thus dynamics tend reduce information course complicate picture various ways considering particle interactions external potentials either require sophisticated analysis full law beyond scope notes appropriate considering cases reading introduction far exhaustive heartily encourage interested explore topics detail short list resources found intriguing useful addition cited introduction information theory general shannon original work shannon words one professors important masters thesis shannon entertaining study written english shannon text cover thomas standard modern overview field theorists practitioners colah blog post visual information theory http entertaining extremely helpful getting basic intuition 
around relationship entropy communication information theory statistics machine learning excellent entertaining introduction topics alreadymentioned mackay want explore likely enjoy csiszar shields would suggest one mackay readers interested pursuing bayesian development entropy much deeply may enjoy bernardo smith provides extremely thorough development decision theory strong perspective notes course information processing learning famous machine learning department excellent accessible find http information theory physics biology marc harper number fun papers views biological evolutionary dynamics learning processes framework information theory harper john baez student blake pollard wrote nice easyreading review article role information concepts biological chemical systems baez pollard generally john baez blog interesting vignettes insights role information plays physical biological worlds https thoroughly connection information dissipation second law thermodynamics see one https references amari nagaoka methods information geometry american mathematical society baez pollard relative entropy biological systems entropy bernardo smith bayesian theory john wiley sons new york cover thomas elements information theory john wiley sons new york csiszar shields information theory statistics tutorial foundations trends communications information theory harper information geometry evolutionary game theory arxiv pages harper replicator equation inference dynamic arxiv pages mackay information theory inference learning algorithms cambridge univeristy press edition shannon mathematical theory communication bell system technical journal shannon prediction entropy printed english bell system technical journal
7
topological dimension gromov boundaries hyperbolic mladen bestvina camille horbez richard wade oct october abstract give upper bounds linear rank topological dimensions gromov boundaries intersection graph free factor graph cyclic splitting graph finitely generated free group contents hyperbolic boundaries free factor graph intersection graph cyclic splitting graph mixing trees indices stratifications carried trees indices stratifications closedness stratum cells dimension existence folding sequences directed boundary point openness set points carried specialization fold control diameter getting finer finer decompositions end proof main theorem equivalence classes vertices specializations introduction curve graph orientable hyperbolic surface finite type essential tool study mapping class group proved gromov hyperbolic gromov boundary identified space ending laminations klarreich striking application curve graph geometry infinity recent proof mod finite asymptotic dimension implies turn satisfies integral novikov conjecture crucial ingredient proof finite asymptotic dimension curve graph first proved using minsky tight geodesics recently recovered bound linear genus number punctures via finite capacity dimension gromov boundary latter approach builds upon work gabai bounded topological dimension building covers terms surface bounding capacity dimension requires getting metric control covers importance curve graph study mapping class groups led people look hyperbolic graphs present paper mainly interested three free factor graph cyclic splitting graph intersection graph gromov boundary graph homeomorphic quotient subspace boundary outer space dimension equal quotient maps bounds cohomological dimension gromov boundaries priori imply finiteness topological dimensions goal present paper main theorem boundary topological dimension boundary topological dimension boundary topological dimension know whether equality holds cases see open questions end introduction following gabai work proof relies constructing decomposition gromov boundary terms notion hope getting control covers construct approach may pave way towards proof finite asymptotic dimension via finite capacity dimension gromov boundaries rest introduction devoted explaining strategy proof although treat cases paper mainly focus free factor graph introduction simplicity say word proof works proof similar bounding topological dimension establish main theorem use following two topological facts lemma proposition let separable metric space written finite union dim proved appealing following fact exists countable cover closed subsets dim example one recover fact dim two facts using decomposition set points exactly rational coordinates proved using second point decomposing countably many closed subsets xij two points set precisely rational coordinates one easily checks sets xij finding arbitrarily small boxes around point xij boundary outside xij notice decomposition hereafter stratification finitely many subsets first point provided map hereafter index map showing amounts proving every point clopen neighborhoods within arbitrary small diameter equivalently every point arbitrary small open neighborhoods empty boundary actually thanks second point enough write stratum countable union closed subsets called hereafter cell decomposition prove cell decomposition stratification cell decomposition boundary separable metric space equipped visual metric points boundary represented first reasonable attempt could define stratification using index map 
similar one introduced roughly counts orbits branch points directions points trees although make use features definition stratification slightly different based notion consists triple free simplicial tree respectively equivalence relation set vertices respectively directions say tree carried map called carrying map sense compatible structure typical situation identifies two vertices identifies germs two directions based equivalent vertices directions technical reasons general definition carrying map slightly weaker particular bit flexible possible images vertices three equivalence classes directions point equivalence class trees admit bijections one another carried equivalently representative carried index combinatorial datum mainly counts orbits equivalence classes vertices directions index point defined maximal index carries determines cell defined set points carries define stratification letting collection points index covered countable collection cells varies index view topological facts recalled main theorem follows following points proposition boundary contained union cells strictly greater index implies closed proposition cell dimension say dimension equal exclude possibility empty first point show converges carrying maps representatives converge map representative however limiting map may longer carrying map example inequivalent vertices may image limit edges may collapsed point collapse edges map point get induced map collapse determines new structure combinatorial argument enables count number directions lost passing show unless carried proof second point relies cell decomposition process similar gabai used splitting sequences surfaces get finer finer covers space ending laminations context starting cell construct finer finer decompositions clopen subsets means folding sequences eventually subsets decomposition small enough diameter precisely starting one resolve illegal turn folding several possibilities folded track see figures section operation yields subdivision various first crucial fact prove open folding sufficiently long reach diam hyperbolicity crucial folding sequence determine unparameterized going infinity towards set defined obtained time process contained set endpoints geodesic rays starting simplicial tree associated passing bounded distance definition visual metric boundary implies diameter converges move along folding path find open neighborhood diam boundary empty contains points strictly greater index get dim required word points represented trees free factor system elliptic leads work allowed bigger vertex stabilizers index also takes account complexity elliptic free factor system priori definition index setting yields quadratic bound topological dimension get linear bound cohomological dimension also deal fact point equivalence class trees may admit bijections different representatives leads analysing preferred mixing representatives classes closely organization paper section review definitions descriptions gromov boundaries also prove facts concerning mixing representatives points used tackle last difficulty mentioned paragraph indices defined section stratifications gromov boundaries given prove section cell closed stratum showing made points strictly higher index introduce folding moves section use prove cell dimension proof main theorem completed section paper appendix illustrate motivations behind technical requirements appear definitions traintracks carrying maps open questions mentioned earlier hope cell decompositions boundaries defined present paper 
may provide tool tackle question finiteness asymptotic dimension various graphs following blueprint gabai uses cell decomposition ending lamination space show highly connected sufficiently complicated surfaces connectivity established earlier surfaces unknown whether boundaries connected question local connectivity boundaries related question local connectivity boundary outer space also open knowledge finally would like address question finding lower bounds dimensions gabai shows topological dimension bounded genus surface number punctures since ending lamination space surface sits subspace gives lower bound dimension improving gap upper lower bounds well finding lower bound interesting problem acknowledgments would like thank patrick reynolds conversations related present project vera pointing reference relating topological cohomological dimensions present work supported national science foundation grants authors residence mathematical sciences research institute berkeley california fall semester first author supported nsf grant hyperbolic boundaries review definitions three hyperbolic free factor graph intersection graph graph descriptions gromov boundaries novelties facts concerning mixing trees final subsection free factor graph free factor graph graph equipped simplicial metric whose vertices conjugacy classes proper free factors two conjugacy classes free factors joined edge representatives hyperbolicity proved describe gromov boundary first recall unprojectivized outer space cvn space isometry classes minimal free simplicial isometric simplicial metric trees closure equivariant hausdorff topology identified space minimal small stabilizers nondegenerate arcs cyclic possibly trivial tripod stabilizers trivial exists coarsely map cvn sends tree free factor elliptic tree obtained collapsing edges points arational proper free factor elliptic acts dense orbits minimal subtree two arational trees equivalent exists bijection denote subspace made arational trees arational trees introduced reynolds proved every arational either free dual arational measured lamination surface theorem homeomorphism extends map continuously boundary mean sequences cvn converging tree topology sequence converges topology intersection graph conjugacy class geometric either part free basis else corresponds boundary curve surface fundamental group identified intersection graph mann definition variation bipartite graph whose vertices simplicial trees together set geometric conjugacy classes tree joined edge conjugacy class whenever elliptic hyperbolicity proved intersection graph also graph section denote fat space free arational trees coarsely map cvn sends tree cvn tree obtained collapsing edges points theorem homeomorphism fat extends map continuously boundary continuity extension understood way statement theorem cyclic splitting graph cyclic splitting simplicial minimal edge stabilizers cyclic possibly trivial graph graph whose vertices homeomorphism classes two splittings joined edge common refinement hyperbolicity proved mann recall two trees compatible exists tree admits maps onto tree compatible tree compatible denote subspace made trees two trees equivalent compatible common tree although obvious shown equivalence relation denote map cvn given forgetting metric theorem horbez homeomorphism extends map continuously boundary class preferred representatives mixing recall tree mixing segments exists finite set contained union finitely many translates theorem horbez every contains mixing tree two mixing trees 
admit bijections tree admits map onto every mixing tree space arational trees contained space trees equivalence relation defined restriction equivalence relation arational trees mixing inclusion induces subspace inclusion mixing trees section establish facts concerning mixing trees possible point stabilizers building work concerned free factor graph would need results reader may decide skim section avoid technicalities proofs first reading lemma let mixing let proper free factor minimal subtree discrete possibly reduced point particular proper free factor elliptic arational proof assume towards contradiction simplicial since trivial arc stabilizers levitt decomposition trivial arc stabilizers may reduced point case dense orbits let free factor hence vertex group decomposition subtree also subtree dense orbits lemma family gtb transverse family mixing transverse covering addition corollary stabilizer equal proposition stabilizer subtree transverse covering mixing tree free factor fact elliptic get contradiction collection subgroups free factor system coincides set nontrivial point stabilizers simplicial trivial arc stabilizers natural order collection free factor systems saying free factor system contained free factor system whenever every factor contained one factors proposition let mixing either dual arational measured lamination closed hyperbolic surface finitely many points removed else collection point stabilizers free factor system proof first assume exist free splitting point stabilizers elliptic terminology section tree surface type argument lemma since skeleton dynamical decomposition reduced point words dual arational measured lamination surface assume exists free splitting point stabilizers elliptic let smallest free factor system every point stabilizer contained within factor lemma factor acts discretely minimal subtree addition trivial arc stabilizers trivial arc stabilizers either elliptic else free splitting second situation occur otherwise point stabilizers would form free factor system contradicting minimality hence coincides collection point stabilizers also establish characterization mixing representatives given equivalence class trees start following lemma lemma let assume dense orbits exists map either bijection else exists elliptic latter case point stabilizer proof notice surjectivity follows minimality collection subtrees form transverse family bijection one subtrees family nondegenerate proposition implies stabilizer stab nontrivial discrete implies minimal subtree reduced point contains element acts hyperbolically fixes point addition cyclic last assertion lemma holds proposition tree mixing every element elliptic also elliptic dual arational lamination surface trees mixing proof mixing trees admit alignmentpreserving map onto every element elliptic also elliptic mixing admits map onto mixing tree bijection lemma exists element elliptic second assertion proposition follows last assertion lemma dual arational lamination surface mixing point stabilizers cyclic corollary let tree let maximal free factor system elliptic let mixing representative maximal elliptic free factor system mixing proof mixing point stabilizers conclusion obvious mixing proposition implies dual lamination surface exists element contained elliptic proposition shows collection elliptic subgroups free factor system free factor system strictly contains indices stratifications carried trees definition data minimal simplicial trivial edge stabilizers equivalence relation set vertices two adjacent 
vertices equivalent equivalence relation set directions vertices two directions equivalent directions also equivalent equivalence classes directions called gates denote free factor system made point stabilizers remark including equivalence relation vertex set definition may look surprising reader standard theory free groups roughly speaking equivalence classes vertices correspond branch points trees carried track explained detail appendix paper usually also impose work satisfy additional assumptions let pair directions based common vertex called turn said legal inequivalent subtree crosses turn intersection directions say legal turn crossed legal definition admissible admissible every vertex exist three pairwise inequivalent directions exists element acting hyperbolically whose axis legal crosses turn mod particular least three gates every equivalence class vertices admissible call triple definition tripod legal elements say axes form tripod legal axes edges simplicial trees given affine structure enables consider maps simplicial trees linear edges linear unique metric isometric restricted every edge say metric determined linear map naturally arise morphisms trees suppose map simplicial linear edges collapse edge point induced equivalence relation set vertices given saying two vertices equivalent let point let associated equivalence class soon equivariance implies setwise stabilizer stab equal stabilizer point let set directions based points since collapse edge induces equivalence relation two directions equivalent germs map direction set stab equivalence classes maps injectively set stab directions call collection equivalence classes structure induced typical example tree carried general definition carrying slightly technical involves bit flexibility respect definition exceptional classes vertices defined follows definition exceptional classes vertices let admissible equivalence class vertices called exceptional exactly gates definition specialization let two let exceptional equivalence class vertices say specialization exists vertex orbit coarsest equivalence relation finer two directions based vertex orbit every direction vertex equivalent direction vertex definition carrying say carried map linear edges collapse edge point structure induced obtained finite possibly trivial sequence specializations situation call map carrying map respect remark motivation introducing specializations allowing definition carrying explained appendix specializations appear naturally later paper start performing folds tracks lemma collection exists tree carried countable proof countably many minimal simplicial trivial edge stabilizers addition carries tree stabilizer every equivalence class vertices finitely generated subgroup every point stabilizer small finitely generated set finite finitely many vertices two vertices stab stabilizer every gate cyclic finite equivalence relation vertices recovered taking finite set representatives equivalence classes taking finite set vertices stab equivalence class form gstab equivalence relation edges recovered taking finite set representatives gates taking finite set representatives oriented edges determining directions gate form gstab hence determined simplicial tree finite family stab stab gives countable number possible traintracks also need following observation lemma let admissible let legal segment exists element acting hyperbolically whose axis legal contains proof since admissible exist elements acting hyperbolically whose axes legal pass respectively meet one 
extremity subtree legal standard properties group actions trees imply turns axis contained translates axis legal contains following two lemmas state every tree carried admissible addition carrying map completely determined structure given free factor system say tree subgroups elliptic lemma let free factor system let trivial arc stabilizers exists admissible carried proof without loss generality assume maximal free factor system let replacing subgroups appropriate conjugates choose universal cover graph groups depicted figure unique map linear edges indeed every vertex nontrivial stabilizer must sent unique point fixed elements subgroups fix point maximality collapse edge therefore admissibility checked taking advantage fact vertex groups infinite one construct required legal elements taking appropriate products elliptic elements assume let free basis acts freely discrete orbits elliptic basis exists reducing factors form bounded subset see corollary let minimal subtree let finite graph one two vertices extend figure tree proof lemma case marked number vertices attaching loops labelled elements fixed vertex let universal cover isometric embedding extends map collapse edges element elliptic induced admissible vertex valence least isometric embedding restricted lemma let let admissible carried exists unique carrying map structure furthermore carrying map varies continuously set trees carried equivariant topology proof prove uniqueness enough show vertex completely determined structure let tripod legal elements let three edges taken axes isometric restricted also restricted axes implies intersection axes reduced point must send point continuity also follows argument order define means equivalence class trees carried need following lemma lemma let let bijection carried carried proof let carrying map let bijection let unique linear map coincides vertices claim carrying map indeed since bijection two vertices addition since preserves alignment germs two directions identified identified recall denotes subspace made trees definition carrying equivalence classes trees equivalence class carried equivalently mixing representative carried indices stratifications define stratifications gromov boundaries means index function taking finitely many values define index index tree explain two related defining index boundary point equivalence class trees geometric index tree following definition reminiscent index trees slight difference constants use definition geometric index tree let tree trivial arc stabilizers geometric index defined igeom stab denotes set branch points denotes number orbits directions lemma trivial arc stabilizers igeom arational igeom free arational igeom proof let igl stab igeom igl proved theorem igl igeom addition igeom igl rank stab assume arational free since least one orbit branch points get equation igeom igl otherwise dual measured lamination surface point stabilizers either trivial cyclic get igeom igl index carrying index tree height free factor system defined maximal length proper chain free factor systems recall free factor system associated free factor system consisting vertex stabilizers associated tree definition index geometric index defined igeom sum taken finite set representatives equivalence classes denotes number stab gates vertices denotes rank stab index defined pair igeom indices ordered lexicographically remark admissible every equivalence class vertices nonnegative contribution geometric index contribution class geometric index zero stab trivial exactly three 
gates precisely exceptional lemma let tree trivial arc stabilizers let admissible carries igeom igeom proof let unique carrying map since admissible vertices mapped branch points carrying map distinct equivalence classes nonexceptional vertices mapped distinct branch points equivalence class mapping branch point stab stab furthermore one checks two gates based distinct stab mapped directions distinct stab since exceptional vertices contribute geometric index follows igeom igeom notice inequality lemma might strict branch direction tree visible track following definition instead directly counting branch points branch directions count maximal number directions visible definition carrying index tree let tree trivial arc stabilizers define carrying index denoted maximal index admissible carries ideal carrier admissible carries remark view lemma ideal carrier tree maximal free factor system elliptic lemma index tree trivial arc stabilizers take boundedly many values bound depending index arational tree comprised index free arational tree comprised proof first assertion consequence proposition together fact bound depending height free factor system assertions follow proposition nontrivial free factor elliptic arational tree height carries arational tree index boundary point stratifications define index equivalence class trees recall definition carries carries mixing representatives definition index boundary point index equivalence class defined maximal index admissible carries equivalently carrying index mixing representatives given admissible define cell subspace made classes carried define stratum set points gromov boundary written union strata varies finite set possible indices mixing trees tree mixing proper free factor elliptic lemma says arational therefore boundary coincides subspace union strata comprised finally boundary written union subspace made equivalence classes free actions view topological facts recalled introduction left showing cell closed stratum dim contents sections respectively give complete overview proof main theorem section closedness stratum general property tree carried closed condition however cells determined closed strata goal present section boundary study metrizable throughout paper use sequential arguments work topology appropriate proposition let admissible points index strictly greater particular closed proof proposition based lemma proposition lemma let admissible let sequence mixing trees carried assume trees converge tree denote either mixing maximal free factor system elliptic else proof free factor system elliptic trees therefore also elliptic maximal free factor system elliptic collapses mixing representative free factor system maximally elliptic remark implies mixing possibly maximally elliptic corollary implies maximally elliptic mixing representative hence case also complete proof proposition thus left understanding case limiting tree mixing maximal elliptic free factor system done proposition idea carrying maps always converge map general limiting map fail carrying map may even collapse edges points case prove proposition jump index start proving existence limiting map lemma let admissible let sequence trees trivial arc stabilizers converging tree assume trees carried let carrying map maps converge equivariant topology map proof let vertex since admissible exist three inequivalent edges form legal tripod two elements act hyperbolically whose axes legal axis resp crosses turn resp axes intersect compact segment initial point replacing inverses assume 
translate along segment direction going since maps carrying maps elements hyperbolic trees axes intersect compact nondegenerate segment translate direction limiting tree intersection atg ath axes fixed sets still compact segment elements hyperbolic still translate direction along intersection define initial point atg ath repeat process orbit vertices obtain map vertices extend map linearly edges distances intersections axes well initial points determined topology follows two vertices distance dtn converges goes infinity implies sequence maps converges proposition let admissible let sequence mixing trees carried assume converges mixing tree maximal free factor system elliptic let carrying map let limit maps let tree obtained collapsing edges whose reduced point let induced map let induced admissible either igeom igeom else obtained finite possibly trivial sequence specializations hence carries proving proposition first complete proof proposition facts proof proposition let carried inequality strict otherwise would ideal carrier contradicting assume carried let sequence converging let mixing representative equivalence class since projectively compact subsequence assume converges tree since boundary map closed tree using lemma proof reduces case mixing maximal free factor system elliptic since carried proposition shows carried satisfying igeom igeom implies turn rest section devoted proof proposition proof proposition track admissible proving admissible first observe hyperbolic elements still hyperbolic indeed view proposition assumption maximal elliptic free factor system elliptic contained free factor conjugacy class given power boundary curve hence contained proper free factor relative therefore axis crosses orbits edges collapsed point collapse map prove admissible let let first observe bounded subtree indeed otherwise would find two oriented edges say pointing direction would imply hyperbolic contradiction let maximal legal subtree using fact admissible find three pairwise inequivalent edges lying outside based extremal vertices vertices necessarily distinct subtree legal particular edge collapsed point since admissible lemma gives three elements act hyperbolically whose axes legal axis crosses segment mod map preserves alignment restricted axes limit also preserves alignment restricted axes implies legal addition axes form legal tripod admissible controlling index prove igeom igeom unless carries given vertex first establish contribution equivalence class index less sum contributions index let let set vertices mapped equivalent vertices mapped point set union equivalence classes vertices stab stab two vertices stab hence may pick finite set representatives stab equivalence classes vertices suppose images equivalence classes correspond orbits points chosen independent passing appropriate subsequence reordering possibly passing subsequence may assume mapped distinct stab points exceptional classes notice classes might exceptional well suppose equivalence class gates stabilizer rank exceptional classes contribute geometric index amount contribute geometric index let equivalence class corresponding image contributes index ispthe number orbits gates rank stab define difference index contribution index contribution goal control number stab gates lost passing let subtree spanned points since finitely many stab equivalence classes vertices tree obtained stab invariant subtree attaching finitely many orbits finite trees stab finite subtree recall assumed maximal elliptic free factor system using 
proposition implies stab either cyclic contained may trivial second case elliptic hence quotient finite graph rank marked points corresponding images claims oriented edge collapsed gate corresponding mapped direction point oriented edges determine inequivalent directions based vertices stab nondegenerate equivalent sufficiently large one directions mapped direction point proof claims first claim collapsed endpoints lie arc two points second claim equivalent nondegenerate sufficiently large intersection contains nondegenerate segment equivalent based distinct equivalence classes vertices stab let call arc connecting contained tree covered union two arcs corresponding starting starting one check initial direction either must contained since trivial arc stabilizers two oriented edges orbit equivalent therefore finitely many orbits pairs equivalent directions therefore choose large enough conclusion second claim holds pairs inequivalent directions based vertices stab whose nondegenerate equivalent denote gnout resp gnin set gates based vertices stab mapped outside resp inside claim implies map set enout edges gnout set gates claim implies two edges enout distinct gates distinct furthermore definition implies natural map induced gnin set dir directions based marked points quotient graph injective summing two inequalities follows bounded number directions based marked points quotient graph distinguish three cases notice passing subsequence assume one occurs case stabilizer stab either trivial fixes point trees vertex happens particular stab contained case graph rank marked points include leaves indeed every leaf either projection point else projection unique point stabilizer equal stab latter case assumption apply following fact graph fact finite connected graph rank marked points containing leaves directions marked points proof fact euler characteristic argument shows finite connected graph vertices exactly directions vertices viewing vertex set set marked points removing vertex set loses least two directions fact follows fact applied quotient graph shows see new equivalence class contributes least geometric index sum contributions previous equivalence classes index shows index increases soon left case remaining situation graph single point possibly one say equivalence classes exceptional claim edges corresponding directions collapsed furthermore claim distinct equivalence classes gates based vertices orbit mapped distinct gates follows either gates equivalence class passing index increases gates corresponding directions mapped bijectively exceptional class corresponds specialization otherwise would see extra gates repeating argument across equivalence classes find either geometric index greater obtained applying finite number specializations latter case implies carried case group stab elliptic particular stab cyclic case graph circle finitely many finite trees attached whose leaves projections points particular rank leaves marked applying fact thus get since get new equivalence class contributes least geometric index sum contributions previous equivalence classes index case stabilizer stab cyclic fixes point trees vertex mapped case graph rank image quotient graph might marked however set marked points includes leaves use following variation fact easily proved first including missing leaf set marked points fact suppose finite connected graph rank set marked points containing leaves except possibly one directions marked points fact applied graph shows since get new equivalence class contributes 
least geometric index sum contributions previous equivalence classes index done cells dimension goal section prove following fact proposition let admissible dimension strategy proof proposition following given point goal construct arbitrarily small open neighborhoods empty boundary done using folding sequences section defined notion specialization gives new traintrack section introduce operations called folding moves enable define new folding illegal turn make following definition definition folding sequence folding sequence traintracks infinite sequence admissible obtained applying folding move followed finite possibly trivial sequence specializations given say folding sequence directed first prove section folding sequences exist lemma let admissible let exists folding sequence directed obtained finite possibly trivial sequence specializations prove section sets folding sequence traintracks open showing following two facts lemma let admissible let specialization admissible open subset lemma let admissible let obtained folding illegal turn admissible open subset words sets open neighborhoods closed boundaries made trees higher index view proposition complete proof proposition left showing made arbitrary small proved section form following proposition lemma let admissible let let folding sequence directed diameter converges sum proof proposition four lemmas proof proposition assume let let let folding sequence directed provided lemma obtained finite possibly trivial sequence specializations lemma shows find contains diameter iterative application lemmas ensure open neighborhood furthermore boundary empty made trees higher index proposition point arbitrarily small open neighborhoods empty boundary dim existence folding sequences directed boundary point specializations notion specialization given definition lemma let admissible let specialization admissible proof admissibility clear underlying tree collection legal turns vertex unchanged specialization see notice first exceptional vertices contribute geometric index stabilizers increase definition specialization otherwise two inequivalent vertices orbit would become equivalent new gate created igeom igeom since stabilizers equivalence classes vertices folding moves introduce three types folding moves discuss properties singular fold figure singular fold two black vertices equivalent colors give gates equivalence class vertices define definition singular fold see figure let let two edges based common vertex determine equivalent directions obtained singular fold map consists equivariantly identifying vertices directions based equivalent vertices remark note resp direction resp pointing towards one perform singular fold well equivalent indeed first condition definition implies last condition implies definition given admissible denote subspace made trees dense orbits ideal carrier lemma let admissible let two edges form illegal turn directions pointing towards let obtained applying singular fold admissible proof let folding map given definition numbers stabilizers equivalence classes vertices gates unchanged check admissibility let vertex exists partial fold figure partial fold tripod legal axes tripod legal axes sends legal turns legal turns finally show let carrying map since extremities equivalent two edges mapped segment follows carrying map factors fold attain map one check induced obtained finite possibly trivial sequence specializations using sequence specializations passing carried equality indices shows also ideal carrier follows 
conversely ideal carrier carrying map composition obtained finite possibly trivial sequence specializations follows definition partial fold see figure let let illegal turn vertex obtained partial fold map consists equivariantly identifying proper initial segment proper initial segment vertex trivalent vertices unique directions based vertices vertex equivalent vertex directions pairwise inequivalent remark last condition definition implies new vertex exceptional lemma let admissible let obtained partial fold admissible proof condition definition admissibility easy check vertex orbit notations definition let three directions containing respectively let considered modulo construct legal element whose axis crosses turn argue proof lemma one first finds two elements legal axes respectively axes subtree spanned contains turn legal element legal element crosses turn required see first note stabilizer equivalence class corresponding new vertex trivial stabilizers equivalence classes vertices unchanged passing addition folding process number nonexceptional equivalence classes vertices along ranks stabilizers number associated gates remains new exceptional trivalent vertex contributes index hence igeom igeom therefore also lemma let admissible let carrying map assume exist two edges form illegal turn smaller let obtained applying partial fold proof recall equipped metric induced map edge length let simplicial tree obtained equivariantly identifying proper initial segment length proper initial segment length let folding map first claim vertex trivalent indeed otherwise vertex would infinite stabilizer would maximal free factor system elliptic contradiction thus map factors map since enough prove obtained finite sequence specializations definition obtained finite sequence specializations let partial fold notice obtained sequence specializations passing new trivalent vertex equal vertex done otherwise show specialization lemma follow indeed equal another vertex first observe always find orbit identified vertices orbit equivalence class nontrivial stabilizer contributes positively geometric index contradicting full fold figure full fold new direction identified direction initial edge general direction may also based vertex long fact ideal carrier assume orbit next observe germs directions identified germs directions vertices otherwise would creating new gate passing increasing index specialization definition full fold see figure let let illegal turn vertex denote extremities let direction based vertex obtained full fold special gate map consists equivariantly identifying proper initial segment vertices directions based vertices direction based pointing towards number equivalence classes vertices remains unchanged full fold vertex stabilizers vertex stabilizers fold creates new gates index remains unchanged admissible also admissible hence lemma let admissible let obtained full fold admissible lemma let admissible let carrying map let two edges form illegal turn assume also assume direction whose initial edge denote vertex let obtained fully folding special gate proof since map factors fold reach map equation ensures identifies germs obtained finite sequence specializations passing know addition lemma implies corollary let admissible let carrying map let two edges form illegal turn assume also assume carried specialization exists obtained fully folding proof let underlying simplicial tree full fold since map factors map equality indices shows existence direction equation lemma holds otherwise fold would 
create new gate would carried higher index direction based vertex satisfying since carried specialization implies therefore lemma implies full fold special gate folding sequences proof lemma given two say obtained applying folding move obtained applying either singular fold partial fold full fold define folding sequence definition recall sequence obtained applying folding move followed finite possibly trivial sequence specializations goal present section prove lemma actually prove slightly stronger version given lemma main fact use proof following lemma let admissible let exists obtained either specialization applying folding move proof assume carried obtained performing specialization let carrying map let two edges form illegal turn exists simplicial definition otherwise would carried specialization case identifies germs directions pointing towards lemma implies singular fold smaller lemma implies carried partial fold finally vice versa corollary implies carried full fold also make following observation lemma let admissible let obtained applying either specialization folding move hence proof obtained applying specialization conclusion follows definition carried together fact lemma obtained applying folding move conclusion follows fact lemmas together observation fold map carrying map also carrying map position prove following stronger version lemma lemma let let finite sequence admissible traintracks obtained applying folding move followed finite possibly trivial sequence specializations exists folding sequence directed obtained finite possibly trivial sequence specializations proof lemma shows conclusion obtained iteratively applying lemma mixing representative class starting noticing perform finitely many specializations row openness set points carried specialization fold trees equivalence classes trees first reduce proofs lemmas analogous versions trees lemma let admissible open subset open subset proof conclusion obvious empty case also empty assume otherwise since let let sequence converging wish prove sufficiently large may use sequential arguments separable metric space let mixing representative since projectively compact passing subsequence assume converges tree closedness boundary map shows representative class since elliptic tree also elliptic mixing corollary implies largest free factor system elliptic mixing representatives contradicting mixing fact thus implies openness shows sufficiently large since mixing representative precisely means sufficiently large desired specializations section prove lemma precisely prove analogous version trees lemma follows thanks lemma lemma let admissible let specialization open subset proof lemma let let sequence trees converges aim show carried sufficiently large imply equality indices let class vertices specialization occurs let vertex identified let three edges adjacent vertices class determining distinct gates let edges based vertices identified aim show sufficiently large vertices identified carrying map initial segments intersection carrying map nondegenerate since carrying map varies continuously carried tree lemma sufficiently large image nondegenerate intersection since three directions determined inequivalent images form tripod least two images would nondegenerate intersection containing contradiction three edges inequivalent since nondegenerate intersection follows carried folds lemma follows following proposition together lemma proposition let admissible let obtained folding illegal turn open proof lemma obtained applying singular 
fold lemma ensures conclusion holds denoting carrying map thus assume using notations previous sections let endpoints branch points admissible lengths segments determined finite set translation lengths obtained partial fold strictly less using fact carrying map varies continuously carried tree lemma see property remains true trees neighborhood lemma implies trees neighborhood carried assume obtained full fold fully folded let endpoint edge corresponding direction based vertex equivalent special gate open conditions therefore lemma gives existence open neighborhood trees carried full fold control diameter getting finer finer decompositions goal present section prove lemma key lemma following lemma let let folding sequence directed converges topology proof let mixing representative find sequence simplicial metric trivial edge stabilizers simplicial tree obtained forgetting metric unique carrying map isometric edges addition natural morphisms fij fik fjk fij sequence converges tree indeed sequence converges addition tree reduced point legal structure induced morphisms stabilize goes legal element exists admissible every legal turn also legal structure induced morphisms become elliptic limit taking limit maps get map isometric restricted every edge show implies convergence topology continuity statement boundary map given theorem first claim tree simplicial indeed assume towards simplicial tree obtained diction let adding vertices subdivide edges subdivide edges since trivial arc stabilizers every edge mapped edge never identifies two edges orbit folding process lower bound difference volume volume therefore folding process stop contradicting fact folding sequence infinite shows simplicial assume towards contradiction equivalent exists map metric completion obtained taking limit maps fnm goes infinity see construction theorem therefore dense orbits map proposition implies equivalent theorem therefore remains prove dense orbits assume towards contradiction since limit free simplicial admit maps onto trivial arc stabilizers since simplicial free factor acting dense orbits minimal subtree lemma translates reduced point form transverse family contradiction stabilizer subtree transverse family mixing tree free factor fact elliptic proposition proof lemma let let mixing representative lemma exists folding sequence directed obtained finite possibly trivial sequence specializations trees lie image optimal liberal folding path converges lemma shows ray point passes within bounded distance converges lemma follows definition visual distance diameter converges end proof main theorem sum arguments previous sections complete proof main theorem proof main theorem start case gromov boundary separable metric space equipped visual metric written union strata made boundary points index take finitely many values stratum union sets varies collection index collection traintracks nonempty countable lemma proposition cell closed stratum proposition dimension countable union closed subsets countable union theorem lemma since union finitely many subsets dimension union theorem proposition implies finite topological dimension bounded number strata minus particular topological dimension equal cohomological dimension see discussion gives desired bound cohomological dimension bounded corollary proved using existence map subset whose topological dimension equal applying shown section gromov boundary equal union strata comprised therefore argument directly shows topological dimension without appealing cohomological dimension 
finally gromov boundary equal union strata comprised stratum subspace stratum given index let subspace set dimension addition closed stratum boundary made points higher index step step step figure following three steps may repeat infinitely often along folding sequence showing homeomorphism type underlying simplicial tree might never stabilize proposition argument thus shows topological dimension equivalence classes vertices specializations appendix would like illustrate reason needed introduce equivalence relation vertex set definition definition allow specializations definition carrying definition equivalence classes vertices definition world surfaces one defines splitting sequence towards arational foliation homeomorphism type complement traintrack eventually stabilizes singularity limiting foliation determined complementary region track visible prongs determine index singular point contrary folding sequence towards arational tree defined present paper number vertices preimage branch point well number directions vertices may never stabilize illustrated following situation see figure point folding sequence one may perform singular fold depicted side figure step declared two identified vertices equivalent computing index operation would resulted drop index situation could occur finitely many times along folding sequence could defined stable index along folding sequence might true general indeed later along folding sequence new trivalent vertex created due partial fold step figure exceptional vertex may declared equivalent another vertex track applying specialization results possibility performing new singular fold later identifying one could hope eventually singular folds folding sequence involve exceptional vertex would cause trouble far index concerned even may fail true general indeed might happen later process full fold involving creates fourth direction red direction step figure equivalent direction instead equivalent direction another vertex class introducing equivalence class set vertices definition ensures index remains constant along folding path prevents overcounting number branch points limiting tree counting branch points introducing specializations definition carrying would finally like give word motivation necessity introduce specializations definition carrying instead saying carries tree map already observed folding sequence directed partial folds create new trivalent vertices happen newly created vertex gets mapped point another vertex could tried case perform partial fold specialization time words declare several distinct partial folds including one denoted declared equivalent vertex second denoted declared equivalent would infinitely many partial folds identified possible vertex approach difficulty comes proving openness set within indeed certain point carrying map may identify new trivalent vertex vertex hence carried nearby trees topology carrying map identifies vertex going away goes infinity hence tree carried without extra flexibility definition carrying would lead open justifies definition carrying single way performing partial fold turn specialization performed later along folding sequence needed references bestvina bromberg asymptotic dimension curve complex bestvina bromberg fujiwara constructing group actions quasitrees applications mapping class groups publ math bestvina feighn outer limits preprint hyperbolicity complex free factors adv maths bestvina reynolds boundary complex free factors duke math cohen lustig small group actions dehn twist automorphisms 
topology culler morgan group actions proc london math soc dowdall taylor graph geometry hyperbolic free group extensions engelking dimension theory north holland publishing company gabai almost filling laminations connectivity ending lamination space geom topol gaboriau levitt rank actions ann scient norm sup boundary free factor graph free splitting graph horbez boundary outer space free product appear israel math hyperbolic graphs free products gromov boundary graph cyclic splittings appear topol spectral rigidity primitive elements group theory kapovich lustig geometric intersection number analogues curve complex free groups geom topol klarreich boundary infinity curve complex relative space preprint leininger schleimer connectivity space ending laminations duke math levitt graphs actions comment math helv mann hyperbolicity cyclic splitting graph geom dedic hyperbolic nonunique ergodicity small thesis university utah masur minsky geometry complex curves hyperbolicity invent math paulin gromov topology topology appl reynolds indecomposable trees boundary outer space geom dedic reducing systems small trees rubin schapiro maps onto spaces finite cohomological dimension topol appl mladen bestvina department mathematics university utah south east jwb salt lake city utah united states bestvina camille horbez laboratoire orsay univ cnrs orsay france richard wade department mathematics university british columbia mathematics road vancouver canada email wade
4
accepted ieee transactions cognitive developmental systems tcds seamless integration coordination cognitive skills humanoid robots deep learning approach jungsik hwang jun tani study investigates adequate coordination among different cognitive processes humanoid robot developed learning direct perception visuomotor stream propose deep dynamic neural network model built dynamic vision network motor generation network network proposed model designed process integrate direct perception dynamic visuomotor patterns hierarchical model characterized different spatial temporal constraints imposed level conducted synthetic robotic experiments robot learned read human intention observing gestures generate corresponding actions results verify proposed model able learn tutored skills generalize novel situations model showed synergic coordination perception action decision making integrated coordinated set cognitive skills including visual perception intention reading attention switching working memory action preparation execution seamless manner analysis reveals coherent internal representations emerged level hierarchy representation reflecting actional intention developed means continuous integration stream index deep learning sensorimotor learning work supported national research foundation korea nrf grant funded korea government msip jungsik hwang school electrical engineering korea advanced institute science technology daejeon south korea jun tani corresponding author professor school electrical engineering korea advanced institute science technology daejeon south korea adjunct professor okinawa institute science technology okinawa japan introduction would desirable robot could learn generate complex behaviors sensorimotor experience human beings one challenge reaching goal complex behaviors require agent coordinate multiple cognitive processes instance imagine robot conducting object manipulation task human partner reaching grasping object human partner indicates target objects located workspace gesture robot observes workspace finds indicated object combining perceived gesture information well perceived object properties switches attention object prepares action executes even simple task complex involving diverse cognitive skills visual perception intention reading working memory action preparation execution essential link skills synergy developing coordination among furthermore skills ideally arise robot experience reaching grasping objects example rather features reflecting human engineer understanding given task may require study employed deep learning approach build robotic system directly autonomously learn visuomotor experience deep learning field machine learning artificial intelligence remarkable advances text recognition speech recognition image recognition many others see recent reviews deep learning one important characteristics deep learning deep networks autonomously extract features data images action sequences without necessity feature extraction methods deep learning provides important tool robotics deep learning robot learn directly sensorimotor data acquired dynamic interaction environment recent studies demonstrated plausibility deep learning field robotics however several challenges adapting deep learning schemes robotics remain example robotic system must process dynamic patterns whereas deep learning schemes generally designed process static patterns addition robotic tasks typically incorporate multiple sensory modalities vision proprioception audition deep learning 
applications attend single modalities visual facial recognition also sigaud droniou pointed still unclear representations built stacking several networks paper propose dynamic deep neural network model called deep dynamic neural network vmdnn learn generate behaviors coordinating multiple cognitive processes including visual perception intention attention switching memorization retrieval working memory action preparation generation model designed process integrate dynamic visuomotor patterns directly perceived robot interaction environment vmdnn composed three different types subnetwork multiple scales neural network mstnn multiple timescales recurrent neural network mtrnn pfc prefrontal cortex subnetworks mstnn demonstrated ability recognize dynamic visual scenes mtrnn learn compositional actions vmdnn model two subnetworks tightly coupled via pfc subnetwork enabling system process dynamic visuomotor patterns simultaneously words problems perceiving visual patterns generating motor patterns regarded inseparable problem therefore visual perception motor generation performed simultaneously single vmdnn model current study approach based previous studies emphasized importance coupling robotic manipulation well developmental robotics part architecture imposed different spatial temporal constraints enable hierarchical computation visuomotor processing abstraction conducted set synthetic robotics experiments examine proposed model also gain insight mechanisms involved learning actions biological systems worth noting artificial neural networks meant model essential features nervous system detailed implementation experiments humanoid robot learned actions visuomotor data acquired repeated tutoring robot actions guided experimenter investigated set cognitive skills integrated coordinated seamless manner generate sequential behaviors robot particularly focused identifying human gestures reading intention underlying generating corresponding sequential behaviors humanoid robot task experiment robot trained recognize human gestures grasp target object indicated gestures task thus required set cognitive skills visual perception intention reading working memory action preparation execution recognizing gestures challenging recognizing static images term reading categorizing used rather recognizing recognition may involve reconstruction patterns categorization current study since spatial temporal information gesture need identified moreover reading intention others observing behavior considered one core abilities required social cognition addition reaching grasping fundamental skills significant influences development perceptual cognitive abilities task extensively studied child development well robotics robotic context require robust perception action systems well simultaneous coordination set cognitive skills making features demanding moreover visuomotor task experiment explicitly segmented different gesture classification action generation experimenter therefore task requires robot adapt different task phases autonomously coordinating set cognitive skills seamless manner throughout task addition task requires robot working memory capability keep contextual information robot could compare categorized human intention perceived object properties implies simply mapping perception action perform task successfully since lacks contextual dynamics expected synergic coordination mentioned cognitive skills would arise learning tightly coupled structure multiple subnetworks densely interact learning stage robot 
learned task supervised manner testing stage examined model learning generalization capabilities furthermore robot examined visual occlusion experimental paradigm visual input model unexpectedly completely occluded verify whether proposed model equipped sort memory capability maintaining information addition analyzed neuronal activation order clarify internal dynamics representations emerging model different phases operation remaining part paper organized follows section review several previous studies employing deep learning schemes robotic context section iii introduce proposed model detail section devoted experimental settings results respectively several key aspects implications proposed model discussed section finally conclude paper indicate current future research directions section vii related works due remarkable success deep learning various fields recent studies attempted employ deep learning field robotics see recent review instance nuovo employed deep neural network architecture order study number cognition humanoid robot robot trained classify numbers learning association auditory signal corresponding finger counting activity found model quicker accurate modalities associated similarly droniou introduced deep network architecture could learn different sensory modalities including vision audition proprioception experiments model trained classify handwritten digits demonstrated learning across multiple modalities significantly improved classification performance lee employed deep learning approach reading human intention supervised mtrnn model employed experiments showed model could successfully recognize human intention observing set motions lenz proposed cascaded detection system detect robotic grasps view scene conducted experiments different robotic platforms found deep learning method performed significantly better one features pinto gupta also addressed problem detecting robotic grasps adopting convolutional neural network cnn model predict grasp location angle although studies demonstrate utility deep learning robotic context focused robotic perception robots directly controlled deep learning schemes recent studies attempted utilize deep learning control robot addressed problem introducing reinforcement learning algorithm enabled agent learn control policy pixel information deep feedforward neural network employed model robotic pendulum used testing platform model able learn policies continuous spaces directly pixel information noda introduced deep computational framework designed integrate sensorimotor data humanoid robot used object manipulation experiments showed model able form multimodal representations learning sensorimotor information including joint angles rgb images audio data levine proposed deep neural network model learned control policy linked raw image percepts motor torques robot showed robot able conduct various object manipulation tasks learning perception control together manner park tani investigated robot could infer underlying intention human gestures generate corresponding behaviors humanoid robot work mtrnn model employed robot could successfully achieve task extracting compositional semantic rules latent various combinations human gestures several problems confront ongoing research robotics applications represented limitations existing studies relatively simple testing platform separate processing individual modalities inability handle temporal information current study aim address challenges deep dynamic neural network model humanoid robot process 
integrate dynamic visuomotor patterns manner iii deep neural network model section describe deep dynamic neural network vmdnn detail proposed model designed process integrate direct perception visuomotor patterns hierarchical structure characterized different constraints imposed part hierarchy several distinctive characteristics first model perform visuomotor processing without feature extraction methods means deep learning schemes second model processes dynamic visuomotor patterns hierarchical structure essential cortical computation third perception action tightly intertwined within system enabling model form multimodal representations across sensory modalities vmdnn model consists three types subnetworks mstnn subnetworks processing dynamic visual images mtrnn subnetworks controlling robot action attention prefrontal cortex pfc subnetwork located top two subnetworks dynamically integrates fig fig vmdnn model consists three types subnetworks mstnn subnetworks dynamic visual image processing left mtrnn subnetworks action generation right pfc subnetwork integration perception action top mstnn subnetwork study employed multiple scales neural network mstnn process dynamic visual images perceived robot conducting visuomotor task mstnn extended convolutional neural network cnn employing leaky integrator neural units different time constants although conventional cnn models shown ability process spatial data static images lack ability process dynamic patterns successfully conduct visuomotor task robot needs extract spatial temporal features latent sequential observations unlike conventional cnn models utilize spatial constraints mstnn model shown process spatial temporal patterns imposing multiple scales constraints local neural activity consequently mstnn extract visual features latent dynamic visual images robot tutored task iteratively mstnn subnetwork composed three layers imposed different constraints layer containing current visual scene layer connectivity smaller time constants layer connectivity larger time constants mstnn layer organizes specific set feature maps retaining spatial information visual input connected successively mtrnn subnetwork current study employed multiple timescales recurrent neural network mtrnn generating robot behavior controlling attention robot mtrnn hierarchical neural network model consisting multiple continuous time recurrent neural networks leaky integrator neurons mtrnn shown superior performance modeling robot sequential action utilizing temporal hierarchy specific lower level mtrnn smaller time constant showing fast dynamics whereas higher level bigger time constant exhibiting slow dynamics due temporal hierarchy mtrnn learn compositional action sequences meaningful functional hierarchy emerges within system consequently entire behavior robot including reaching grasping well visual attention control decomposed set primitives flexible recombination adapting various situations model mtrnn subnetwork composed three layers characterized different temporal constraints showing slow dynamics larger time constant showing fast dynamics smaller layer smallest time constant neurons layers asymmetrically connected layer composed groups softmax neurons indicating sparse representation model output layer receives inputs layer generates behavior outputs well attention control signals pfc subnetwork top pfc prefrontal cortex layer tightly couples mstnn mtrnn subnetworks achieving tight association visual perception motor generation pfc layer recurrent neural network 
consisting set neurons equipped recurrent loops order process abstract sequential information pfc layer receives inputs layer mstnn subnetwork well layer mtrnn subnetwork meaning abstracted visual information proprioceptive information integrated pfc layer pfc layer also forward connection layer control robot behavior attention pfc layer characterized several key features first neurons pfc layer assigned largest time constant result pfc subnetwork exhibits dynamics enables pfc subnetwork carry information situation second neurons pfc layer equipped recurrent connections essential handle dynamic sequential data third perception action coupled via pfc layer integrates two monomodal subnetworks mstnn mtrnn builds multimodal representations abstracted visuomotor information pathway lungarella metta argued perception action separated tightly coupled coupling gradually getting refined developmental process problem formulation fig illustrates structure vmdnn model input model observation world time step obtained robot camera beginning end task observation image represented matrix height width image robot behavior outputs well attention control signals generated output layer time step let denotes output model time step number neurons output layer forward dynamics computation problem defined behavior output attention signal time step given visual observation model parameters kernels weights biases training phase problem defined optimize model parameters order minimize error output layer represented divergence teaching signal network output training data visuomotor sequences obtained repeated tutoring prior training phase detailed description forward dynamics training phase described following sections forward dynamics action generation internal states neural units initialized neutral values onset action generation mode pixel image grayscale obtained robot camera given vision input layer neural unit internal states activations successively computed every layer input layer output layer neuron activation layer transformed analog values using softmax activation control robot joints attention detailed description computational procedure follows txy time step internal state txy dynamic activation neural unit located position ith feature map mstnn layers computed according following formulas tanh time constant feature maps previous layer convolution operator kij kernel connecting jth feature map ith feature map current layer bias please note hyperbolic tangent recommended used activation function enhance convergence pfc layer layer internal state uti dynamic activation yit ith neuron pfc mtrnn layers pfc determined following equations tanh exp exp wij weight jth neural unit ith neural unit sum image obtained robot camera sent input layer model time step action generation mode internal states activations successively computed input layer output layer robot operated based output model including joint position values execution action image acquired robot camera sent input layer model next time step sense image given model considered visual feedback since reflects effect robot action preceding time step training phase model trained supervised learning fashion training data consisted raw visuomotor sequences obtained repeated tutoring robot manually operated experimenter beginning end tutoring visual image perceived robot camera visual observation jointly collected encoder values robot joint positions well level grasping level foveation time step model trained abstract associate visual perception 
proprioceptive information using visuomotor sequence patterns backpropagation time bptt employed learning values parameters kernels weights biases model prior training learnable parameters visual pathway pfc initialized means studies demonstrated efficient method initializing network parameters method study similar visual part model prior learning phase experiment study softmax output layer connected pfc layer connections mtrnn subnetworks pfc layer well recurrent connections within pfc layer removed condition system operates mstnn model trained typical classifier using bptt described values parameters visual pathway acquired used initial values parameters pathways training phases experiment training conducted layer layer training model entire learnable parameters updated minimize error layer represented divergence teaching signal network output yit stochastic gradient descent method applied training entire learnable parameters updated training data presented follows index learning step learning rate weight decay method used prevent overfitting weight decay rate experiment settings robotic platform icub humanoid robot consisting degrees freedom dofs distributed body used simulation icub experiments icub simulator accurately models actual robot physical interaction environment making adequate research platform studying developmental robotics simulation icub shown fig simulation environment screen located front robot display human gestures task table placed screen robot two objects placed task table time step interfacing program captured image perceived robot camera preprocessed captured image sent vmdnn model received output vmdnn model operated robot based output model specific pixel image obtained robot camera embedded left eye passed vision input layer model time step obtained image preprocessed resizing converting grayscale normalizing softmax values output layer converted values corresponding joint positions dofs right arm dofs neck well grasping attention control signals interfacing program operated robot based vmdnn model outputs using motor controller provided icub software package interfacing program captured image robot camera sent back vmdnn model next time step fig experimental settings using icub simulator four types human gestures two objects consisting tall long object placed task space robot grasped target object indicated human gesture object locations task space regarding output robot behavior used robot right arm consisting dofs shoulder pitch roll yaw elbow wrist pronosupination pitch yaw addition network output level extension flexion finger joints order control grasping similar level varied grasping fully grasping used control dofs right hand experiments vmdnn model also controlled visual attention essential generate adequate robot behavior jeong demonstrated mtrnn could seamlessly coordinate visual attention motor behaviors work mtrnn model outputted category object attended external visual guiding system localized position specified object based network output retina image robot camera could fixate object contrast external module employed study vmdnn model controlled robot visual attention specifically two attention control mechanisms employed study attention shift foveation first robot located object attention center visual scene orienting head dofs neck pitch yaw second robot controlled resolution visual scene given network controlling level foveation minimum foveation maximum foveation instance level foveation increased robot hand closely approaching attended 
object consequently images containing target object robot hand given network higher resolution model could clearly perceive object properties including orientation location hand network configuration vmdnn model composed layers layers mstnn subnetwork pfc layer layers mtrnn subnetwork structure vmdnn model used study found empirically preliminary experiments note structure vmdnn model including number layers subnetwork extended depending complexity task since deeper structure enhance learning complex functions visuomotor patterns instance number layers mstnn subnetwork employed process complex visual images similarly complex robot behavior learned employing number layers mtrnn subnetwork reported mstnn layer consisted set feature maps retaining spatial information visual input number size feature maps varied layers vision input layer single feature map containing current visual scene obtained left eye robot layer consisted feature maps sized layer consisted feature maps sized size kernels layers set respectively sampling factors denoting amount shift kernel convolution operation set respectively pfc layer composed neurons kernel size sampling factor set numbers neurons employed mtrnn layers respectively neurons layer comprised groups softmax neurons representing categories model output joint position values robot neck joint position values right arm level grasping level foveation softmax output values group layer inversely transformed analog values directly set joint angles robots level grasping level foveation regarding time scale properties compared two different types visual pathway cnn mstnn cnn condition time constants layers set resulting temporal hierarchy hand mstnn condition time constants layers set respectively resulting temporal hierarchy visual pathway also compared two different temporal scales fast slow pfc layer time constant set fast pfc condition whereas set slow pfc condition sum different network conditions examined experiments throughout experiments time constants layers fixed respectively proper values time constant level model found heuristically preliminary study prior learning learnable parameters initialized values acquired stage enhance learning capability stage model trained grasp object without human gestures also visual pathway additionally classify four types human gestures described section iii learning network trained epochs learning rate task objective task grasp target object indicated human gesture displayed screen beginning task overall task flow follows beginning task robot set home position orienting head toward screen front robot stretching arms sideways home position robot observed human gesture displayed screen fig observing gesture robot oriented head task table two objects consisting long tall object placed observing task space robot figured target object indicated human gesture oriented head target object robot reached right arm grasped target object example human gesture indicated tall object robot pick tall object among two available along information orientation similarly human gesture indicated right side robot figure type orientation object right side see supplementary video task therefore inherently required robot working memory capability maintain human gesture information throughout task phases dynamically combine perceived object properties visuomotor sequence training trial consisted images perceived robot camera visual observation values robot joint positions values grasping foveation signals collected simultaneously beginning end 
tutoring consequently images visuomotor sequences included ones perceived observing human gestures screen well ones perceived completing behavioral task similarly values robot joint positions visuomotor sequences included ones recorded observing human gestures well ones recorded acting task space observation robot trained trials consisting varying object configurations human gestures two objects consisting one tall one long object presented trial location orientation two objects controlled way presenting two objects bias robot behavior toward certain reaching grasping behaviors away others regarding human gestures target object specified one four different human gestures indicating location either left right type either tall long target object fig video clip human gesture displayed screen front robot collected several gesture trials human subjects selected amongst type gesture appeared number times training dataset throughout experiments two different types objects used tall object size long object size objects placed different orientations positions symmetrically distributed task space fig testing stage evaluated model performance respect learned trials well set novel situations order examine model generalization capability first tested model trials two objects randomly located obj examine whether robot able generalize reaching grasping skills unlearned object positions orientations also examined model generalization capability respect human gestures using training trials gestures novel subject sub examined model generalization capability cases novel situation randomly located objects indicated gestures novel subject obj sub noted main focus model ability coordinate cognitive skills gesture recognition working memory rather solely gesture recognition since latter concerned typical perceptual classification tasks addition evaluated model visual occlusion experimental paradigm vision input network completely occluded main focus evaluation verify whether network equipped sort internal memory enabling show robust behavior even visual information unexpectedly completely occluded training trials testing trials obj sub selected trials respectively visual occlusion experiment vision input network occluded onset observing task space onset attending target object onset observing target object onset reaching reaching prior testing learnable parameters initialized ones obtained training results generalization performances table shows success rate network condition trial evaluated successful robot grasped lifted object failure otherwise general mstnn vision slow pfc condition showed better performance conditions condition model successfully learned training trials able generalize learned skills different testing conditions model able generalize reaching grasping skills objects randomly located obj gestures novel subject displayed sub randomly located objects specified novel subject obj sub worth noting model demonstrated relatively low success rates fast pfc conditions especially cnn vision fast pfc condition task required robot maintain human gesture information displayed beginning trial throughout task phases end model equipped temporal hierarchy performed significantly better without table success rate four network conditions network conditions type pfc vision scales layer testing conditions obj sub obj sub cnn fast cnn slow mstnn fast mstnn slow table percentage task failure caused confusion network conditions type pfc scales vision layer testing conditions obj sub cnn fast cnn slow mstnn fast 
We analyzed the failure cases in the training trials and the novel trials (obj-sub), focusing particularly on task failures caused by apparent confusion between the objects, i.e., cases in which the robot simply grasped the incorrect object. Table II shows the percentage of errors caused by confusion in the four network conditions. The model showed the fewest confusion errors in the MSTNN-vision / slow-PFC condition; moreover, the error caused by confusion was more pronounced in the fast-PFC than in the slow-PFC conditions. This result implies that the model was able to maintain correct and stable representations by means of the slow dynamics of the PFC layer.

Fig. (success rate of the four network conditions in the visual occlusion experiment: testing on learned human gestures, object positions, and orientations, and on unlearned ones, sub and pos). The figure illustrates the success rate of each network condition with respect to the training trials and the testing trials (obj-sub) under the different occlusion timings at which the vision input to the model was occluded. The performance of the four network conditions differed significantly. As expected, the model's performance in all conditions generally degraded when the vision input was occluded at earlier task phases, and the model showed relatively worse performance in the CNN-vision / fast-PFC condition than in the other conditions. It can therefore be inferred that the memory capability achieved by the slow dynamics of the higher-level subnetwork plays an important role when the vision input is occluded. By means of this memory, the robot was able to maintain the information about the human gesture, as well as the information about the target object, including its position, type, and orientation, throughout the phases of task execution. This result shows the importance of the internal contextual dynamics of the proposed model and highlights the difference between the proposed model and a previous study, which was prone to occlusion due to its lack of a capability for keeping memory.

Development of internal representation. In order to reveal the model's internal representation, we analyzed the neural activation in the training trials using t-SNE (t-distributed stochastic neighbor embedding), a dimensionality-reduction algorithm. In this analysis, the number of initial dimensions and the perplexity were set to fixed values. Fig. depicts the internal representation emerging in each layer at three different task phases. Each point indicates a single training trial, and the distances between points represent the relative similarity between trials; the color and shape of a point denote the type of human gesture and the type of object, respectively, and the number next to a point indicates the object position (in some cases the numbers are omitted due to overlap). Please note that the mode of grasping differed depending on the type of object. We focused on the relationships between the patterns within each plot, the axes varying across plots.

In the MSTNN layers, the sequential visual images were abstracted along the temporal and spatial dimensions through hierarchical processing. For instance, when the robot begins the observation of the task space, five clusters can be observed in the lower layer; these clusters correspond to the five possible pairs of locations employed for training, and objects appearing at the same position appeared in the same cluster regardless of the type of object and the type of human gesture. In the higher layer, the representations were separated by the type of the target object and differentiated within clusters reflecting the relative locations of the two objects. Similarly, in a later phase the higher layer encoded the type and location of the target object, while the distinction with respect to object type was less clear in the lower layer. In the transitions between layers, representations reflecting the types of human gesture and representations reflecting the specific target objects became organized in the PFC layer. While the gesture was being observed, four clusters reflecting the type of gesture appeared, suggesting that the human gestures were successfully recognized; then the internal representations started to develop further, and progressively smaller clusters indicating specific target objects emerged in the later phase, regardless of the presented human gesture. Our interpretation is that the model read the human intention and translated it into the robot's own intention.
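The t-SNE analysis can be reproduced with off-the-shelf tools. A sketch follows, assuming placeholder values for the two parameters the paper fixes (number of initial PCA dimensions and perplexity) and synthetic activations in place of the recorded ones.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# One point per trial: a layer's activation vector taken at a fixed task
# phase. The paper first reduces to a small number of "initial dimensions"
# with PCA, then applies t-SNE with a fixed perplexity.
acts = np.random.default_rng(1).standard_normal((120, 200))  # trials x neurons
acts_pca = PCA(n_components=30).fit_transform(acts)          # initial dims (placeholder)
emb = TSNE(n_components=2, perplexity=25.0, random_state=0).fit_transform(acts_pca)
# emb[:, 0] vs emb[:, 1] can be scattered, colored by gesture type and
# shaped by object type, as in the paper's figures.
```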
More specifically, the visual images containing the human gestures were processed hierarchically in the MSTNN subnetwork, and an abstracted representation underlying one of the four human intentions appeared in the PFC layer; the robot then simultaneously incorporated the perceived object properties, including location, type, and orientation, and formed its own intention of reaching toward and grasping the target object. To this end, the result suggests that the robot's intention was not explicitly mapped but arose dynamically from the perceived information.

In the higher proprioception layer, four clusters representing the type of gesture appeared at the end of the human gesture observation and the onset of the target object observation. This implies that proprioception was calibrated based on the perceived human gestures, so that the robot could exhibit different behaviors depending on the target object. This layer later encoded the type and location of the target object, although the differences with respect to object location were less clear than those appearing in the PFC layer. In the lower layer, the internal representations developed similarly but less clearly: for instance, this layer encoded the type and location of the object, but the representations were less clear than in the PFC layer.

Fig. (development of internal representations in the PFC for exemplar cases: trials consisting of different configurations of the target object indicated by different types of human gesture, examined using the t-SNE algorithm; line colors indicate the type of human gesture, markers indicate the task phase, the arrow indicates the direction of the time steps, and at the end of the trajectories the configuration of the target object is specified). We analyzed the development of internal representations for exemplar cases in the PFC and proprioception layers using the t-SNE algorithm on a set of representative training trials consisting of different target objects indicated by different human gestures. Compared to the previous analysis, the number of initial dimensions and the perplexity were set to different fixed values. The figure illustrates that the proposed model dynamically integrated perception and computed motor plans: the PFC layer developed representations encompassing visual and motor information whenever it became available. For instance, the PFC layer identified the type of gesture even before the end of the human gesture presentation: while the robot was observing the gestures, the representations in the PFC layer had already started to develop differently according to the type of gesture, and when the robot started observing the task space, the representations developed differently depending on the object features, location and type. The analysis also shows that the proprioception layer was calibrated based on the perceived human gesture. For example, when the robot started observing the task space, the representations in this layer began to differentiate depending on the perceived human gesture, the implication being that proprioception was calibrated on the basis of perceived visual information. The representations in the lower proprioception layer developed similarly, showing a development of representations in which the target object was closely related to the robot's current action. In sum, this layer mainly encoded motor actions and mediated between the cognition in the PFC and the motor actions.

Fig. (development of the first three principal components, PCs, of each layer in two opposite cases, the CNN-vision / fast-PFC condition and the MSTNN-vision / slow-PFC condition; the numbers on the bottom horizontal lines indicate the time steps from the beginning to the end of the task, and colors denote the component score values; legends are omitted since the focus is on the similarities in the development of neuronal activation under different occlusion timings). In order to clarify the differences in the internal dynamics of the model when the vision input was occluded, we compared two different occlusion timings in the two diametrically opposed cases, not occluded versus occluded at the onset of reaching, in the CNN-vision / fast-PFC condition and the MSTNN-vision / slow-PFC condition. The development of the first three principal components of each layer, with the same target object, is depicted from the beginning to the end of the task. In the CNN-vision / fast-PFC condition, the robot failed to grasp the target object when the visual input was completely occluded from the onset of reaching, although it was able to grasp the target object when the visual input was not occluded; the analysis clearly reveals that the PCs of the PFC and proprioception layers developed differently in the two cases, implying that the model could not form consistent representations. In the MSTNN-vision / slow-PFC condition, the robot was able to grasp the target object in both cases, and the analysis reveals that the development of the PCs of the PFC and proprioception layers was similar in the occluded and non-occluded cases.
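The PC-development analysis amounts to projecting a layer's activation time series onto its first three principal components. A sketch with synthetic data in place of the recorded activations:

```python
import numpy as np
from sklearn.decomposition import PCA

# Trajectory of the first three principal components of one layer's
# activation over a trial, as used to compare occluded vs. non-occluded runs.
acts_t = np.random.default_rng(2).standard_normal((300, 100))  # time x neurons
pcs = PCA(n_components=3).fit_transform(acts_t)
# Plotting pcs[:, k] against the time step reproduces the kind of curves in
# the paper's PC-development figure; similar trajectories under different
# occlusion timings indicate consistent internal representations.
```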
In particular, when the vision input was occluded from the onset of reaching, the activation developed similarly to the non-occluded case, implying the development of coherent proprioceptive representations regardless of the occlusion conditions. In sum, the PFC layer as well as the MTRNN layers developed consistent representations even when the visual information was lost at the initiation of the reaching action, resulting in successful task performance.

Discussion. Throughout the experiments, we verified several key aspects of the proposed model; in this section we discuss them in detail.

Coordinated dynamic structure. The proposed model developed a coordinated dynamic mechanism over the whole network, enabling the robot to learn behaviors requiring seamless coordination of cognitive skills. In terms of downward causation, constraints were imposed on each level of the hierarchy, and learning was performed in this tightly coupled structure. There were two key factors in achieving this coordinated dynamic mechanism. First, the multiple-timescales constraint enabled the model to dynamically compose conceptual representations at the higher level by assembling perceptual features from the lower level, suggesting the emergence of different cognitive functionalities across the hierarchy. Sporns argues that cognitive functions develop in human brains under anatomical constraints, including the connectivity and timescales among local regions; similarly, the different constraints imposed on different parts of the proposed model led to the development of different cognitive functionalities at each level of the hierarchy. For example, the PFC layer mainly encoded task-related information, whereas the MTRNN layers encoded information related to the robot's current action; more specifically, the lower proprioception layer developed a representation closely reflecting the robot's current action, whereas the higher proprioception layer played the role of mediating between the cognition in the PFC and proprioception. This result is analogous to findings in which task information is encoded in the PFC while arm movements are encoded in the primary motor cortex of monkeys.

Second, the learning performed in the tightly coupled structure enabled the model to form coordinated representations without explicitly encapsulating each perceptual modality or separating perception, action, and decision making. In the present model, the coupling of perception and action was achieved by connecting two different deep networks at the PFC layer. This coupling enabled the model to generate motor plans dynamically in response to the visual perceptual flow, a finding in line with previous studies in which the importance of the coupling between perception and action is emphasized. The analysis of neural activation shows that dynamic motor plan generation involved a continuous integration of the visual perceptual flow rather than a direct mapping from visual perception; such a dynamic transformation of visual information into behavioral information has also been observed in experiments on macaque monkey brains. A multimodal representation helps in distinguishing object type and orientation, and competency in multimodal information integration is considered essential for embodied cognition. Ultimately, the present model was able to develop multimodal representations by abstracting and associating the visual perception and the proprioceptive information arriving through the different pathways. In other words, the higher-level representations were based not only on the visual information about the object and the corresponding human gesture but also on the information about the movement trajectories resulting in a successful grasp of the target object.

Memory capability. Our results indicate that the proposed model is capable of developing and employing a working memory. We found that the robot was able to maintain information at the higher levels throughout the task phases while dynamically combining it with object percepts. For instance, the robot maintained the human intention categorized at the beginning of the task and combined it with the object percept so that it could reach for and grasp the target object. Furthermore, the proposed model performed robustly even when the visual input was completely and unexpectedly occluded. This memory capability was achieved by the temporal hierarchy of the model as well as by the recurrent connections in the PFC layer; in particular, when the time constants were larger, the model showed more robust performance in various circumstances, including the experiments with novel object configurations as
well as the ones with unexpected visual occlusion. Although the suitable values of the time constant for each level might differ depending on the task, the finding of this study suggests that the progressively larger time constants in the architecture played an important role in forming the coordinated dynamic structure, including the memory capability. This finding is also consistent with previous studies that have shown the importance of a similar temporal hierarchy in the multiple spatio-temporal scales neural network (MSTNN) and the multiple timescale recurrent neural network (MTRNN). The internal contextual dynamics of the proposed model highlight a key difference from a previous study that lacked the capability of keeping memory. Furthermore, the model was able to prepare an action prior to action execution: specifically, the neuronal activation of the proprioception layer was calibrated based on the perceived visual images prior to action execution. This capability played a particularly important role in the visual occlusion experiment, enabling the robot to reach for and grasp the target object even without monitoring the target object and its hand during reaching. This result is analogous to findings reporting that neurons in the brain of the macaque monkey encoded grasp-related information even before the movement was intended.

VII. Conclusion. The current study introduced a deep dynamic neural network, the VMDNN model, to learn to read human intention and to generate the corresponding behaviors in robots by coordinating multiple cognitive processes, including visual recognition, attention switching, memorizing and retrieving in working memory, and action preparation and generation, in a seamless manner. The simulation study of the model using the iCub simulator revealed that the robot could categorize the human intention by observing gestures, preserve the contextual information, and execute the corresponding actions. The analysis showed that a synergic coordination among the cognitive processes developed as learning of the tutored experience was performed by the whole network, allowing dense interaction between the subnetworks. In conclusion, the aforementioned cognitive mechanism developed by means of downward causation, in terms of the scale differentiation among the local subnetworks and the topological connectivity among them, i.e., the way of interacting through coupling.

Several research directions are suggested by this study. First, experiments incorporating more cognitive skills as well as more sensory modalities need to be conducted for a better understanding of the mechanisms of learning actions in human biological neural systems. Second, the scalability of the proposed model needs to be examined in experiments with a real robot and a larger variety of objects, since the complexity of the task the model can perform seems proportional to the size of the learnable parameters, including the number of layers in each subnetwork and the number of neurons in each layer. Third, the robot's sensorimotor experience in this study was acquired through tutoring by the experimenter; in cases where such tutoring may be impossible, it would be worth investigating how this experience can be obtained autonomously.

References
[1] Sigaud and Droniou, Towards deep developmental learning, IEEE Transactions on Cognitive and Developmental Systems.
[2] Droniou, Ivaldi and Sigaud, Deep unsupervised network for multimodal perception, representation and classification, Robotics and Autonomous Systems.
[3] Guerin, Kruger and Kraft, A survey of the ontogeny of tool use: from sensorimotor experience to planning, IEEE Transactions on Autonomous Mental Development.
[4] Lungarella, Metta, Pfeifer and Sandini, Developmental robotics: a survey, Connection Science.
[5] Cangelosi, Schlesinger and Smith, Developmental Robotics: From Babies to Robots, MIT Press.
[6] Berthouze and Ziemke, Epigenetic robotics: modelling cognitive development in robotic systems, Connection Science.
[7] Asada, MacDorman, Ishiguro and Kuniyoshi, Cognitive developmental robotics as a new paradigm for the design of humanoid robots, Robotics and Autonomous Systems.
[8] Taniguchi, Nagai, Nakamura, Iwahashi, Ogata and Asoh, Symbol emergence in robotics: a survey, Advanced Robotics.
[9] Tani, Compositionality in cognitive brains: a neurorobotics study, Proceedings of the IEEE.
[10] Bengio, Courville and Vincent, Representation learning: a review and new perspectives, IEEE Transactions on Pattern Analysis and Machine Intelligence.
[11] LeCun, Bengio and Hinton, Deep learning, Nature.
[12] Schmidhuber, Deep learning in neural networks: an overview, Neural Networks.
[13] Levine, Finn, Darrell and Abbeel, End-to-end training of deep visuomotor policies, arXiv preprint.
[14] Lenz, Lee and Saxena, Deep learning for detecting robotic grasps, International Journal of Robotics Research.
[15] Di Nuovo, De La Cruz and Cangelosi, A deep learning neural network for number cognition: a study with the iCub, Joint IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob).
[16] Deisenroth et al., From pixels to torques: policy learning with deep dynamical models, arXiv preprint.
[17] Jung, Hwang and Tani, Self-organization of hierarchy via learning of dynamic visual image patterns on action sequences, PLoS ONE.
[18] Yamashita and Tani, Emergence of functional hierarchy in a multiple timescale neural network model: a humanoid robot experiment, PLoS Computational Biology.
[19] Savastano and Nolfi, Incremental learning in a 14 DOF simulated iCub robot: modeling infant reach/grasp development, in Biomimetic and Biohybrid Systems, First International Conference on Living Machines, Barcelona, Spain (Prescott, Lepora, Mura, Verschure, eds.), Springer, Berlin/Heidelberg.
[20] Savastano and Nolfi, A robotic model of reaching and grasping development, IEEE Transactions on Autonomous Mental Development.
[21] McClelland, Botvinick, Noelle, Plaut, Rogers and Seidenberg, Letting structure emerge: connectionist and dynamical systems approaches to cognition, Trends in Cognitive Sciences.
[22] Dominey and Warneken, The basis of shared intentions in human and robot cognition, New Ideas in Psychology.
[23] Tomasello, Carpenter, Call, Behne and Moll, Understanding and sharing intentions: the origins of cultural cognition, Behavioral and Brain Sciences.
[24] Vernon, Thill and Ziemke, The role of intention in cognitive robotics, in Toward Robotic Socially Believable Behaving Systems, Volume I: Modeling Emotions (Esposito and Jain, eds.), Springer International Publishing, Cham.
[25] McCarty, Clifton, Ashmead, Lee and Goubet, How infants use vision for grasping objects, Child Development.
[26] Lasky, The effect of visual feedback of the hand on the reaching and retrieval behavior of young infants, Child Development.
[27] Oztop, Bradley and Arbib, Infant grasp learning: a computational model, Experimental Brain Research.
[28] Ugur and Oztop, Motor primitives and learning of grasp affordances, International Conference on Intelligent Robots and Systems (IROS).
[29] Ugur, Nagai, Sahin and Oztop, Staged development of robot skills: behavior formation, affordance learning and imitation with motionese, IEEE Transactions on Autonomous Mental Development.
[30] Lee et al., Human motion based intent recognition using a deep dynamic neural model, Robotics and Autonomous Systems.
[31] Pinto and Gupta, Supersizing self-supervision: learning to grasp from tries and robot hours, arXiv preprint.
[32] Noda, Arie, Suga and Ogata, Multimodal integration learning of robot behavior using deep neural networks, Robotics and Autonomous Systems.
[33] Park and Tani, Development of compositional and contextual communication of robots by using the multiple timescales dynamic neural network, presented at the Joint IEEE ICDL-EpiRob, Rhode Island, USA.
[34] Nishimoto and Tani, Development of hierarchical structures for actions and motor imagery: a constructivist view from a synthetic study, Psychological Research.
[35] Krizhevsky, Sutskever and Hinton, ImageNet classification with deep convolutional neural networks, Advances in Neural Information Processing Systems.
[36] Jung, Hwang and Tani, Multiple spatio-temporal scales neural network for contextual visual recognition of human actions, Joint IEEE ICDL-EpiRob.
[37] Jeong, Arie, Lee and Tani, A study on integrative learning of proactive visual attention and motor behaviors, Cognitive Neurodynamics.
[38] Celikkanat, Sahin and Kalkan, Recurrent slow feature analysis for developing object permanence in robots, presented at the IROS Workshop on Neuroscience and Robotics, Tokyo, Japan.
[39] Dominey, Hoen and Blanc, Neurological basis of language and sequential cognition: evidence from simulation, aphasia, and ERP studies, Brain and Language.
[40] Lungarella and Metta, Beyond gazing, pointing, and reaching: a survey of developmental robotics, in Proceedings of EpiRob (Prince, Berthouze, Kozima, Bullock, Stojanov, Balkenius, eds.), Lund University Cognitive Studies.
[41] LeCun, Bottou, Orr and Müller, Efficient backprop, in Neural Networks: Tricks of the Trade (Montavon, Orr, Müller, eds.), Springer, Berlin/Heidelberg.
[42] Rumelhart, McClelland and the PDP Research Group, Parallel Distributed Processing, MIT Press.
[43] Bengio, Lamblin, Popovici and Larochelle, Greedy layer-wise training of deep networks, Advances in Neural Information Processing Systems.
[44] Tsagarakis, Metta, Sandini, Vernon, Beira, Becchi et al., iCub: the design and realization of an open humanoid platform for cognitive and neuroscience research, Advanced Robotics.
[45] Tikhanoff, Cangelosi, Fitzpatrick, Metta, Natale and Nori, An open-source simulator for cognitive robotics research: the prototype of the iCub humanoid robot simulator, presented at the Proceedings of the Workshop on Performance Metrics for Intelligent Systems, Gaithersburg, Maryland.
[46] Hwang, Jung, Madapana, Kim, Choi and Tani, Achieving synergy in the cognitive behavior of humanoids via deep learning of dynamic coordination, International Conference on Humanoid Robots (Humanoids).
[47] van der Maaten and Hinton, Visualizing data using t-SNE, Journal of Machine Learning Research.
[48] Sporns, Networks of the Brain, MIT Press, Cambridge.
[49] Mushiake, Saito, Sakamoto, Itoyama and Tanji, Activity in the lateral prefrontal cortex reflects multiple steps of future events in action plans, Neuron.
[50] Franquemont, Black and Donoghue et al., Linking objects to actions: encoding of target object and grasping strategy in primate ventral premotor cortex, Journal of Neuroscience.
[51] Meyer and Damasio, Convergence and divergence in a neural architecture for recognition and memory, Trends in Neurosciences.
[52] Georgeon, Marshall and Manzotti, ECA: an enactivist cognitive architecture based on sensorimotor modeling, Biologically Inspired Cognitive Architectures.
[53] Carpaneto, Umilta, Fogassi, Murata, Gallese and Micera, Decoding the activity of grasping neurons recorded from the ventral premotor area of the macaque monkey, Neuroscience.
Minimum distance functions of complete intersections

Yuriko Pitones and Rafael H. Villarreal (with a coauthor whose name did not survive extraction; the first and third authors were supported by SNI, the second by CONACYT)

Abstract. We study the minimum distance function of a complete intersection graded ideal in a polynomial ring with coefficients in a field. For graded ideals of dimension one whose initial ideal is a complete intersection, we use the footprint function to give a sharp lower bound for the minimum distance function. Then we show some applications to coding theory.

1. Introduction. Let $S = K[t_1,\dots,t_s]$ be a polynomial ring over a field $K$ with the standard grading, and let $I \subset S$ be a graded ideal. The degree, or multiplicity, of $S/I$ is denoted $\deg(S/I)$. Fix a graded monomial order $\prec$ on $S$ and let $\mathrm{in}_\prec(I)$ be the initial ideal of $I$. The footprint of $I$, denoted $\Delta_\prec(I)$, is the set of monomials not in $\mathrm{in}_\prec(I)$ (the standard monomials); this notion occurs in several branches of mathematics under different names (see the references for a list of alternative names). Given an integer $d \ge 1$, let $\Delta_\prec(I)_d$ be the set of standard monomials of degree $d$, let $F_{\prec,d}$ be the set of homogeneous zero divisors of $S/I$ of degree $d$ that are $K$-linear combinations of standard monomials, and let $M_{\prec,d}$ be the set of standard monomials of degree $d$ that are zero divisors of $S/\mathrm{in}_\prec(I)$. The footprint function of $I$, denoted $\mathrm{fp}_I$, is the function given by
$$\mathrm{fp}_I(d) = \deg(S/I) - \max\{\deg(S/(\mathrm{in}_\prec(I), t^a)) : t^a \in M_{\prec,d}\}$$
if $M_{\prec,d} \neq \emptyset$, and $\mathrm{fp}_I(d) = \deg(S/I)$ otherwise. The minimum distance function of $I$, denoted $\delta_I$, is the function given by
$$\delta_I(d) = \deg(S/I) - \max\{\deg(S/(I, F)) : F \in F_{\prec,d}\}$$
if $F_{\prec,d} \neq \emptyset$, and $\delta_I(d) = \deg(S/I)$ otherwise. These two functions were introduced and studied in earlier work; notice that $\delta_I$ is independent of the monomial order (see the Lemma below). To compute $\delta_I$ is a difficult problem; to compute $\mathrm{fp}_I$ is much easier. We come to the main result of this paper, which gives an explicit lower-bound formula for $\mathrm{fp}_I$ for a family of complete intersection graded ideals.

Theorem. If the initial ideal $\mathrm{in}_\prec(I)$ is a complete intersection of height $s-1$ generated by monomials of degrees $d_1 \le \dots \le d_{s-1}$, then $\delta_I(d) \ge \mathrm{fp}_I(d) \ge (d_{k+1}-\ell)\, d_{k+2}\cdots d_{s-1}$, where $k \ge 0$ and $\ell$ are the unique integers such that $d = \sum_{i=1}^{k}(d_i - 1) + \ell$ and $1 \le \ell \le d_{k+1}-1$.

An important case of this theorem, from the viewpoint of applications, is when $I$ is the vanishing ideal of a finite set of projective points over a finite field (see the discussion below on the connection of $\mathrm{fp}_I$ with coding theory). If $I$ is a complete intersection monomial ideal of dimension at least one, then $\delta_I = \mathrm{fp}_I$ (see the Proposition in Section 3); this case is of theoretical interest because a monomial ideal is a vanishing ideal only in very particular cases. If $I$ is a complete intersection graded ideal of dimension one, we give a formula for the degree. There is an easy classification of the complete intersection property (see the Lemma in Section 3): basically there are two cases to consider, and in one of them, up to permutation of variables, the ideal is generated by pure powers $t_i^{d_i}$ together with one monomial of the form $t_r^{a_r} t_s^{a_s}$; the other case is generated by pure powers only. In the proof of the main result, which takes place in an abstract algebraic setting with no reference to vanishing ideals over finite fields, we use these formulas for the degree to bound degrees uniformly.

The formulas for the degree are also useful in the following setting. If the vanishing ideal $I(X)$ is generated by binomials whose initial terms are pure powers, the Lemmas can be used to give upper bounds on the number of zeros of homogeneous polynomials on the variety $X$; the upper bound depends on the exponent of the leading term, and a complete upper bound is obtained from the initial ideal.

The interest in studying $\mathrm{fp}_I$ comes from algebraic coding theory. Indeed, if $I = I(X)$ is the vanishing ideal of a finite subset $X$ of a projective space over a finite field, the minimum distance $\delta_X(d)$ of the corresponding projective Reed-Muller-type code of degree $d$ is equal to $\delta_I(d)$, and $\mathrm{fp}_I(d)$ is a lower bound for it (see the Theorem and Lemma of Section 2). Therefore one has the formula $\delta_X(d) = \deg(S/I) - \max\{\deg(S/(I,F))\}$, where the maximum is taken over the homogeneous $d$-forms $F$ that are zero divisors of $S/I$. The abstract study of minimum distance and footprint functions provides fresh techniques to study the number of points where such forms vanish. Hence, using the main result, we get the following uniform upper bound on the number of zeros of polynomials that do not vanish on all of $X$.

Corollary. If $\mathrm{in}_\prec(I(X))$ is a complete intersection generated by monomials of degrees $d_1 \le \dots \le d_{s-1}$, then any homogeneous polynomial of degree $d$ that does not vanish at some point of $X$ has at most $\deg(S/I(X)) - (d_{k+1}-\ell)\, d_{k+2}\cdots d_{s-1}$ zeros in $X$, with $k$, $\ell$ the unique integers as above.

This result gives a tool for finding good uniform upper bounds on the number of zeros of polynomials over finite fields, a problem of fundamental interest in algebraic coding theory and algebraic geometry.
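The footprint is easy to compute directly when the initial ideal is a monomial ideal, since a monomial is standard exactly when no generator divides it. Below is a brute-force sketch over exponent vectors, using a small complete intersection as an illustration; the generators chosen are an assumption for the example, not taken from the paper.

```python
from itertools import product

def divides(g, m):
    """Exponent-vector divisibility: t^g | t^m iff g <= m componentwise."""
    return all(gi <= mi for gi, mi in zip(g, m))

def standard_monomials(gens, s, d):
    """Degree-d monomials in s variables outside the monomial ideal
    generated by `gens` (a sketch of the footprint Delta(I)_d)."""
    return [m for m in product(range(d + 1), repeat=s)
            if sum(m) == d and not any(divides(g, m) for g in gens)]

# Example: in(I) = (t1^3, t2^3) in K[t1, t2, t3], a complete intersection of
# height 2.  Counting standard monomials per degree gives the Hilbert
# function of S/in(I); the values stabilize at deg(S/I) = 3 * 3 = 9.
gens = [(3, 0, 0), (0, 3, 0)]
print([len(standard_monomials(gens, 3, d)) for d in range(8)])
# -> [1, 3, 6, 8, 9, 9, 9, 9]
```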
We leave as an open question whether this uniform bound is optimal, that is, whether equality can be attained by some polynomial. Tohăneanu and Van Tuyl conjectured a corresponding lower bound when $I(X)$ is a complete intersection generated by polynomials of degrees $d_1,\dots,d_{s-1}$; by the Corollary, the conjecture is true if, in addition, the initial ideal is a complete intersection. We leave as another open question whether the Corollary remains true if one only assumes that $I(X)$ itself is a complete intersection (see the Proposition below). To illustrate the use of the Corollary in a concrete situation, consider the lexicographical order $\prec$ and let $T$ be the projective torus over the finite field $F_q$. The ideal $I(T)$ is generated by a Gröbner basis of binomials whose initial ideal is the complete intersection generated by $t_1^{q-1},\dots,t_{s-1}^{q-1}$. Therefore, noticing that $\deg(S/I(T))$ is equal to $(q-1)^{s-1}$ and setting $d_i = q-1$ for all $i$, we obtain: a homogeneous polynomial of degree $d$ not vanishing on all the points of $T$ has at most $(q-1)^{s-1} - (q-1-\ell)(q-1)^{s-k-2}$ zeros in $T$, where $k$ and $\ell$ are the unique integers such that $d = k(q-2) + \ell$ and $1 \le \ell \le q-2$. This uniform bound, given by the Theorem, can be seen to be in fact optimal by constructing an appropriate polynomial attaining it; when $\mathrm{fp}_I = \delta_I$ we say that the bound realized by the ideal is sharp. For vanishing ideals over finite fields, this notion is essentially another way of saying that the bound is optimal. The first interesting family of ideals where equality holds is due to Geil (Theorem), whose result essentially shows that $\delta_I = \mathrm{fp}_I$, for the graded lexicographical order, when $I$ is the homogenization of the vanishing ideal of the affine space over a finite field. Recently, Carvalho (Proposition) extended this result, replacing the affine space by a cartesian product of subsets; in this case the underlying code is the so-called affine cartesian code, whose explicit formula for the minimum distance was first given earlier; in a recent paper, Bishnoi, Clark, Potukuchi and Schmitt give another proof of this formula (Theorem) using a result of Alon (Theorem); see also the references.

There are two relevant applications of our main result to algebraic coding theory. First, we recover the formula for the minimum distance of an affine cartesian code (Theorem, Proposition); in fact, the homogenization of the corresponding vanishing ideal is an ideal where the bound is sharp (see the Corollary). Second, we present an extension of the result of Alon (Theorem), in terms of the regularity and vanishing, about coverings of the cube by affine hyperplanes, which applies to any finite subset of projective space whose vanishing ideal has a complete intersection initial ideal (see the Corollary and Example). Finally, we exemplify how our results can be used in practice, showing a vanishing ideal and computing all its possible initial ideals (see the Example). In Section 2 we introduce projective Reed-Muller-type codes and present the results and terminology needed in the paper; for unexplained terminology and additional information, and for deeper advances in the knowledge of the degree, the theory of Gröbner bases, commutative algebra and Hilbert functions, and the theory of error-correcting and linear codes, we refer the reader to the references.

2. Preliminaries. In this section we present the results needed throughout the paper and introduce some more notation. Let $S$ be a graded polynomial ring over a field $K$ with the standard grading and let $I$ be a graded ideal. The Hilbert function of $S/I$ is $H_I(d) = \dim_K(S_d/I_d)$. By the dimension of $S/I$ we mean its Krull dimension. If $\dim(S/I) = k \ge 1$, the degree, or multiplicity, is the positive integer $\deg(S/I) = (k-1)!\,\lim_{d\to\infty} H_I(d)/d^{k-1}$, and $\deg(S/I) = \dim_K(S/I)$ if $\dim(S/I) = 0$. The regularity of the Hilbert function, or simply the regularity of $S/I$, denoted $\mathrm{reg}(S/I)$, is the least integer $r \ge 0$ such that $H_I(d)$ is equal to the Hilbert polynomial for $d \ge r$.

Let $\prec$ be a monomial order and let $I$ be an ideal. The leading monomial of a polynomial $f$ is denoted $\mathrm{in}_\prec(f)$, and the initial ideal of $I$ is denoted $\mathrm{in}_\prec(I)$. A monomial is called standard with respect to $I$ if it is not in $\mathrm{in}_\prec(I)$, and a polynomial is called standard if it is a $K$-linear combination of standard monomials; the set of standard monomials, denoted $\Delta_\prec(I)$, is called the footprint of $I$, and for $I$ graded the number of standard monomials of degree $d$ is $H_I(d)$.

Lemma. Let $\prec$ be a monomial order, let $I$ be an ideal, and let $f$ be a polynomial of positive degree. If $\mathrm{in}_\prec(f)$ is regular on $S/\mathrm{in}_\prec(I)$, then $f$ is regular on $S/I$. (Proof sketch: it suffices to show that $(I : f) = I$; pick a Gröbner basis of $I$ and, by the division algorithm, write any element of $(I:f)$ with a standard remainder; if the remainder were nonzero, its leading term times $\mathrm{in}_\prec(f)$ would lie in $\mathrm{in}_\prec(I)$, a contradiction.) Remark: for each integer $d$, the assignment $t^a \mapsto \deg(S/(\mathrm{in}_\prec(I), t^a))$ is the map used in the definition of $\mathrm{fp}_I$, and for a monomial ideal it can be computed combinatorially.

Projective Reed-Muller-type codes. Let $K = F_q$ be a finite field with $q$ elements, let $P^{s-1}$ be the projective space over $K$, and let $X$ be a subset of $P^{s-1}$; as usual, points of $X$ are denoted $[\alpha]$. The results of this paragraph are valid if we assume that $K$ is any field and $X$ is a finite subset, instead of assuming that $K$ is finite; however, the interesting case for coding theory is that of a finite field. The vanishing ideal of $X$, denoted $I(X)$, is the ideal generated by the homogeneous polynomials that vanish at all points of $X$, and in this case the Hilbert function of $S/I(X)$ is denoted $H_X$. Let $P_1,\dots,P_m$ be a set of representatives of the points of $X$ and fix a degree $d \ge 1$. Each $P_i$ has at least one nonzero entry, say the $k_i$-th; setting $f_i = t_{k_i}^d$, one has $f_i(P_i) \neq 0$, and one gets a well-defined evaluation map
$$\mathrm{ev}_d \colon S_d \to K^m, \qquad f \mapsto \big(f(P_1)/f_1(P_1),\dots,f(P_m)/f_m(P_m)\big).$$
The image of $\mathrm{ev}_d$, denoted $C_X(d)$, is called the projective Reed-Muller-type code of degree $d$ on $X$; it is also called the evaluation code associated to $X$. This type of code has been studied using commutative algebra methods, especially Hilbert functions; see the references therein. Definition: a linear code is a linear subspace of $K^m$; the basic parameters of a linear code $C$ are its length $m$, its dimension $\dim_K C$, and its minimum distance $\delta(C) = \min\{\|v\| : 0 \neq v \in C\}$, where $\|v\|$ is the number of nonzero entries of $v$. Lemma: the map $\mathrm{ev}_d$ is essentially independent of the chosen set of representatives, so the basic parameters of $C_X(d)$ are independent of these choices. The following proposition summarizes the relation between projective Reed-Muller-type codes and the theory of Hilbert functions. Proposition: (i) the length of $C_X(d)$ is $|X| = \deg(S/I(X))$; (ii) its dimension is $\dim_K C_X(d) = H_X(d)$; (iii) its minimum distance is $\delta_X(d)$.
One has the Singleton bound, and $\delta_X(d) = 1$ for $d \ge \mathrm{reg}(S/I(X))$. The next result gives an algebraic formulation of the minimum distance of a projective Reed-Muller-type code in terms of the degree and the structure of the underlying vanishing ideal. Theorem: $\delta_X(d) = \delta_{I(X)}(d)$ for $d \ge 1$. This result yields an algorithm, which can be implemented in CoCoA, Macaulay2, or Singular, to compute $\delta_X(d)$ for small values of $|X|$ and of the number of variables (see the procedure in the Example below); using Sage one can also compute $\delta_X(d)$ by finding a generator matrix of $C_X(d)$. A direct consequence of the Theorem is that $\delta_X(d) = \deg(S/I(X)) - \max\{|Z_X(F)|\}$, where $Z_X(F)$ denotes the zero set in $X$ of a $d$-form $F$ that does not vanish on all of $X$.

The next lemma follows using the division algorithm. Lemma: let $X$ be a finite subset of $P^{s-1}$ and let $[P]$ be a point of $X$; then the vanishing ideal $I_{[P]}$ of $[P]$ is a prime ideal of degree one, and $I(X) = \bigcap_{[P]\in X} I_{[P]}$ is a primary decomposition. Remark: for a finite set $X$ of projective points, $S/I(X)$ is a reduced graded ring of dimension one; this follows directly from the Lemma. In particular, the regularity of the Hilbert function equals the regularity of $S/I(X)$. An ideal is called unmixed if all its associated primes have the same height. The next result classifies the monomial vanishing ideals of finite sets in projective space. Proposition: for a finite subset $X$ of $P^{s-1}$, the following are equivalent: $I(X)$ is a monomial ideal; $I(X)$ is an intersection of face ideals generated by $s-1$ variables; every point of $X$ is of the form $[e_i]$ for some unit vector $e_i$. (Proof sketch: $I(X)$ is a radical graded ideal of dimension one, hence unmixed; a monomial ideal of this type is an intersection of face ideals of height $s-1$, i.e., ideals generated by $s-1$ variables, and the Zariski closure argument together with the Lemma identifies the points with the $[e_i]$.)

3. Complete intersections. Let $S$ be a polynomial ring over a field $K$ with the standard grading. An ideal $I$ is called a complete intersection if there exist generators whose number equals the height of $I$. In what follows, by a monomial order we mean a graded monomial order, comparing first by total degree.

Lemma. Let $I$ be an ideal generated by monomials with $\dim(S/I) = 1$. Then $I$ is a complete intersection if and only if, up to a permutation of variables, either $I = (t_1^{d_1},\dots,t_{s-1}^{d_{s-1}})$ or $I = (t_1^{d_1},\dots,t_{s-2}^{d_{s-2}},\, t_{s-1}^{a_{s-1}} t_s^{a_s})$. (Proof sketch: take a minimal generating set consisting of monomials; monomials form a regular sequence exactly when no two of them share a common variable; since the height is $s-1$ and the ideal is generated by $s-1$ elements, either every generator involves one variable, or exactly one generator involves two variables.)

Proposition. Let $A_1$ and $A_2$ be two standard graded algebras over $K$, quotients of polynomial rings in disjoint sets of variables; then the Hilbert series of $A_1 \otimes_K A_2$ is the product of the Hilbert series of $A_1$ and $A_2$.

Lemma (degree formulas). If $I = (t_1^{d_1},\dots,t_{s-1}^{d_{s-1}})$, then $\deg(S/I) = d_1\cdots d_{s-1}$; if $I = (t_1^{d_1},\dots,t_{s-2}^{d_{s-2}},\, t_{s-1}^{a_{s-1}} t_s^{a_s})$, then $\deg(S/I) = d_1\cdots d_{s-2}\,(a_{s-1}+a_s)$. The proofs use the additivity of Hilbert series along the exact sequences $0 \to (S/(I : t^a))[-\deg t^a] \to S/I \to S/(I, t^a) \to 0$, the tensor decomposition of complete intersections from the Proposition, and induction on the number of variables; the companion lemma computes, by the same method and a case analysis over which variables occur in $t^a$, the degree of $S/(I, t^a)$ for a standard monomial $t^a$, which is the quantity entering the definition of $\mathrm{fp}_I$.
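For very small $X$, one can compute $\delta_X(d)$ exactly by enumerating the code, in the spirit of the algorithm mentioned above. A Python sketch over $F_3$ with $X$ the projective torus in $P^2$ follows; this toy instance and the brute-force enumeration are assumptions of the example, not the paper's Macaulay2 procedure.

```python
import itertools
import numpy as np

q = 3
# Projective torus T in P^2 over F_3: representatives with first entry 1
# and all entries nonzero.
X = [(1, a, b) for a in (1, 2) for b in (1, 2)]

def ev(mono, p):
    """Evaluate the monomial with exponent vector `mono` at point p, mod q."""
    r = 1
    for e, c in zip(mono, p):
        r = (r * pow(c, e, q)) % q
    return r

def min_distance(d):
    monos = [m for m in itertools.product(range(d + 1), repeat=3) if sum(m) == d]
    rows = np.array([[ev(m, p) for p in X] for m in monos]) % q
    best = len(X)
    # Enumerate the code = row space of `rows` over F_q (fine at this size).
    for coeffs in itertools.product(range(q), repeat=len(rows)):
        w = np.dot(coeffs, rows) % q
        wt = int(np.count_nonzero(w))
        if 0 < wt < best:
            best = wt
    return best

print([min_distance(d) for d in (1, 2, 3)])  # delta_X(d) for small d
```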
The equality part of these formulas is obtained from the degree of the quotient ideal $(I : t^a)$, given by the same lemmas.

Lemma. Let $I$ be an unmixed graded ideal, let $\prec$ be a monomial order, and let $F$ be homogeneous. Then $\deg(S/(I,F)) \le \deg(S/I)$; and if $I$ is an unmixed radical ideal and $F \notin I$, the inequality is strict. Remark: if $I$ is an unmixed graded ideal with $\dim(S/I) \ge 1$, the degree of $S/(I,F)$ can be greater than the degree of $S/(I:F)$.

Lemma. Let $X$ be a finite subset of $P^{s-1}$ over a field $K$ and let $I = I(X)$ be its graded vanishing ideal. If $0 \neq F$ is homogeneous, the number of zeros of $F$ in $X$ is given by $|Z_X(F)| = \deg(S/(I,F))$ whenever $F \notin I$.

Corollary. Let $I(X)$ be the vanishing ideal of a finite set of projective points and let $F \notin I(X)$ be homogeneous. Then $|Z_X(F)| \le \deg(S/(\mathrm{in}_\prec(I(X)), \mathrm{in}_\prec(F)))$; the proof combines the previous lemma with the fact that the initial ideal of $(I, F)$ contains $(\mathrm{in}_\prec(I), \mathrm{in}_\prec(F))$.

Lemma. Let $I$ be a graded ideal and $\prec$ a monomial order; then the minimum distance function $\delta_I$ is independent of $\prec$. (Proof sketch: fix $d \ge 1$ and let $F$ be a homogeneous zero divisor of degree $d$; picking a Gröbner basis and applying the division algorithm, $F$ can be replaced by a standard homogeneous polynomial of the same degree in the same coset, so the maximum in the definition of $\delta_I(d)$ does not depend on $\prec$.)

Lemma. Let $I$ be an unmixed graded ideal and $\prec$ a monomial order. Then $\mathrm{fp}_I(d) \le \delta_I(d)$ for all $d$, and a sharper comparison holds when $I$ is an unmixed radical ideal. (The inequality $\delta_I(d) \le \deg(S/I)$ is clear; for the other inequality, pick a standard monomial $t^a$ realizing the maximum and compare $\deg(S/(I, t^a))$ with $\deg(S/(\mathrm{in}_\prec(I), t^a))$ using the previous lemmas.)

Proposition. If $I$ is an unmixed monomial ideal, then $\delta_I = \mathrm{fp}_I$ for any monomial order. (The inequality $\mathrm{fp}_I \le \delta_I$ follows from the lemma; for the reverse inequality, notice that $\mathrm{in}_\prec(I) = I$ for a monomial ideal, so the two maxima coincide.)

Proposition. Let $I$ be a graded ideal and $\prec$ a monomial order, and suppose $\mathrm{in}_\prec(I)$ is a complete intersection of height $s-1$ generated by monomials of degrees $d_1,\dots,d_{s-1}$. Then the following hold: $I$ is a complete intersection of height $s-1$, generated by $s-1$ homogeneous polynomials of degrees $d_1,\dots,d_{s-1}$ (elements of a Gröbner basis whose leading terms generate the initial ideal); $\mathrm{reg}(S/I) = \sum_{i=1}^{s-1}(d_i - 1)$ and $\deg(S/I) = d_1\cdots d_{s-1}$, as follows from the formula for the Hilbert series of a complete intersection; $I$ is unmixed, being a complete intersection; hence the inequality $\mathrm{fp}_I \le \delta_I$ holds, and for a standard monomial $t^a$ of degree $d$ the lemmas give $\deg(S/(I, t^a)) \le \deg(S/(\mathrm{in}_\prec(I), t^a))$, so the footprint bound applies.
homogenization corresponding vanishing ideal ideal begin basic application complete intersections corollary finite subset complete intersection fpi proof let generator case deg reg proposition theorem one fpi assume pick points lemma vanishing ideal principal ideal generated linear form notice equal setting get homogeneous polynomial degree exactly zeros thus another application get following uniform upper bound number zeroes polynomials vanish points corollary let finite subset let vanishing ideal let monomial order initial ideal complete intersection generated deg deg sdpthat vanish point integers proof follows corollary theorem leave open question whether uniform bound optimal whether equality attained polynomial another open question whether corollary true assume complete intersection related following conjecture van tuyl yuriko pitones rafael villarreal conjecture conjecture let finite set points complete intersection generated deg notice corollary conjecture true complete intersection also true see corollary affine cartesian codes coverings hyperplanes given collection finite subsets field denote image map affine code degree called affine cartesian code basic parameters projective code equal formula minimum distance affine cartesian code given theorem proposition short elegant proof formula given carvalho proposition shows best way study minimum distance affine cartesian code using footprint application theorem also recover formula minimum distance affine cartesian code examining underlying vanishing ideal show ideal corollary let field let projective type code degree finite set minimum distance given fpi unique integers reverse lexicographical order setting one basis whose initial ideal generated tds see proposition theorem one equality thus inequality follows theorem difficult part proof rest argument reduces finding appropriate polynomial equality occurs using minimum distance greater equal reg set propositions regularity degree respectively assume show inequality notice polynomial product linear forms number zeros equal see hence less equal thus required equality holds proposition therefore theorem complete intersections next result extension result alon theorem applied finite subset projective space whose vanishing ideal complete intersection initial ideal relative graded monomial order corollary let finite subset projective space let monomial order complete intersection generated deg hyperplanes avoid point otherwise cover points reg proof linear forms define respectively assume consider polynomial notice theorem fpi hence vanish least two points contradiction example let polynomial ring lexicographical order let vanishing ideal using procedure theorem obtain following information ideal generated regularity degree respectively ideal whose initial ideal complete intersection generated basic parameters code shown following table fpi corollary hyperplanes avoid point otherwise cover points reg lex gens regularity degree degree max apply apply apply apply tolist set hilbertfunction set hilbertfunction tolist basis vector ideal flatten entries quotient degree ideal else gives minimum distance degree apply example let polynomial ring lexicographical order let vanishing ideal yuriko pitones rafael villarreal example using get generated regularity degree respectively ideal complete intersection generated basic parameters code shown following table fpi corollary hyperplanes avoid point otherwise cover points reg next give example graded vanishing ideal finite field carvalho computing 
Using the procedure, we obtain that $I(X)$ is generated by binomials, with known regularity and degree; the ideal is a complete intersection, and the basic parameters of the code, with $\delta_X(d)$ and $\mathrm{fp}(d)$, are shown in the corresponding table. By the Corollary, hyperplanes avoiding one point of $X$ and covering the rest must number at least $\mathrm{reg}(S/I(X))$.

Example. Let $X$ be a subset of a projective space over a finite field with vanishing ideal $I(X)$. Using the procedure, we obtain that the binomials of a universal Gröbner basis of $I(X)$ have a uniform shape, that is, they form a Gröbner basis for every monomial order; the ideal has exactly six different initial ideals, and $\mathrm{fp}$ can be computed for each of them. The basic parameters of the projective Reed-Muller-type code, together with $\delta_X(d)$ and $\mathrm{fp}(d)$, are shown in the corresponding table.

Procedure (Macaulay2, using the gfan interface). [The listing loads the gfan package, computes the universal Gröbner basis and the list of initial ideals, and, for each initial ideal, the degrees of the quotients needed for $\mathrm{fp}$ and the regularity; the code did not survive extraction and is summarized here.]

Acknowledgments. We thank the referee for a careful reading of the paper and for the improvements suggested.

References
Alon and Füredi, Covering the cube by affine hyperplanes, European J. Combin.
Atiyah and Macdonald, Introduction to Commutative Algebra, Addison-Wesley, Reading.
Bishnoi, Clark, Potukuchi and Schmitt, On zeros of a polynomial in a finite grid, Combin. Probab. Comput., to appear.
Bishnoi, Clark, Potukuchi and Schmitt, A bound (announcement), Electron. Notes Discrete Math.
Carvalho, On the second Hamming weight of some Reed-Muller type codes, Finite Fields Appl.
Chardin, Regularity of ideals, closed formulas and applications, Proc. Amer. Math. Soc. (electronic).
CoCoATeam, CoCoA: a system for doing computations in commutative algebra (available online).
Cox, Little and O'Shea, Ideals, Varieties, and Algorithms, Springer.
Delsarte, Goethals and MacWilliams, On generalized Reed-Muller codes and their relatives, Information and Control.
Duursma et al., Reed-Muller codes on complete intersections, Appl. Algebra Engrg. Comm. Comput.
Eisenbud, Commutative Algebra with a View Toward Algebraic Geometry, Graduate Texts in Mathematics, Springer.
Fulton, Algebraic Curves: An Introduction to Algebraic Geometry, Advanced Book Classics, Addison-Wesley, Redwood City (notes written in collaboration with Richard Weiss, reprint of the original).
Geil, On the second weight of generalized Reed-Muller codes, Des. Codes Cryptogr.
Geil and Thomsen, Weighted Reed-Muller codes revisited, Des. Codes Cryptogr.
Gold, Little and Schenck, Cayley-Bacharach and evaluation codes on complete intersections, J. Pure Appl. Algebra.
Codes on the Segre variety, Finite Fields Appl.
Grayson and Stillman, Macaulay2 (available via anonymous ftp).
Greuel and Pfister, A Singular Introduction to Commutative Algebra, extended edition, Springer, Berlin.
Harris, Algebraic Geometry: A First Course, Graduate Texts in Mathematics, Springer, New York.
Kreuzer and Robbiano, Computational Commutative Algebra, Springer, Berlin.
López, Rentería and Villarreal, Affine cartesian codes, Des. Codes Cryptogr.
Sarmiento, Vaz Pinto and Villarreal, Parameterized affine codes, Studia Sci. Math. Hungar.
MacWilliams and Sloane, The Theory of Error-Correcting Codes, North-Holland.
Martínez-Bernal, Pitones and Villarreal, Minimum distance functions of graded ideals and Reed-Muller-type codes, J. Pure Appl. Algebra.
Migliore, Introduction to Liaison Theory and Deficiency Modules, Progress in Mathematics, Birkhäuser, Boston.
Rentería, Simis and Villarreal, Algebraic methods for parameterized codes and invariants of vanishing ideals over finite fields, Finite Fields Appl.
SageMath, mathematical software (available online).
Sala, Mora, Perret, Sakata and Traverso (eds.), Gröbner Bases, Coding, and Cryptography, RISC Book Series, Springer, Heidelberg.
Sarmiento, Vaz Pinto and Villarreal, The minimum distance of parameterized codes on projective tori, Appl. Algebra Engrg. Comm. Comput.
Schmidt, Equations over Finite Fields: An Elementary Approach, Lecture Notes in Mathematics, Springer, New York.
Sørensen, Projective Reed-Muller codes, IEEE Trans. Inform. Theory.
Stanley, Hilbert functions of graded algebras, Adv. Math.
Tohăneanu and Van Tuyl, Bounding invariants of fat points using a coding theory construction, J. Pure Appl. Algebra.
Tsfasman, Vladut and Nogin, Algebraic Geometric Codes: Basic Notions, Mathematical Surveys and Monographs, American Mathematical Society, Providence.
Villarreal, Monomial Algebras, second edition, Monographs and Research Notes in Mathematics, Chapman & Hall/CRC.
Vogel, Lectures on Results on Bezout's Theorem, Tata Institute of Fundamental Research Lectures on Mathematics and Physics, Springer, Berlin.

Departamento de Matemáticas, Centro de Investigación y de Estudios Avanzados del IPN, Apartado Postal, Mexico City. E-mail address: jmb
Departamento de Matemáticas, Centro de Investigación y de Estudios Avanzados del IPN, Apartado Postal, Mexico City. E-mail address: ypitones
Departamento de Matemáticas, Centro de Investigación y de Estudios Avanzados del IPN,
Apartado Postal, Mexico City. E-mail address: vila
Méthodes matricielles: Introduction à la complexité algébrique (Matrix Methods: An Introduction to Algebraic Complexity)

Abdeljaoued (Assistant, Tunis) and Henri Lombardi. Updated version, April.

We prepare the ground here for a simple introduction to the modern tools of algebraic complexity developed during the last three decades. A turning point was Strassen's discovery, a modest-looking but far-reaching fact: the multiplication of two matrices of order two can be done with only seven non-commutative multiplications instead of eight in the base ring. This reduces the asymptotic complexity of multiplying two matrices of order n, and for the first time brought the exponent below 3, whereas earlier research had concentrated on the number of coefficient operations needed to compute the product of two matrices of a fixed order. Since then, numerous tools have been developed and new notions have appeared, such as that of tensor rank, studied intensively notably by Bini, Pan, Strassen, Winograd and others; Pan and his successors pushed the exponent of matrix multiplication down further, and at present one knows an exponent below 2.376. It is thought, however, that the ultimate bound for acceptable exponents would be 2: that for every epsilon the product of two matrices of order n could be computed by a circuit of size O(n^{2+epsilon}) and depth O(log n). Nevertheless, some of these asymptotically fast algorithms are at present inapplicable, because of the enormous constant hidden in the big-O (cf. Knuth). By contrast, Strassen's original method is practical: it begins to beat the usual, so-called conventional, multiplication for moderate sizes.

Parallel computing is a technique in full expansion, which distributes the work over a large number of processors operating at the same time. For fast matrix multiplication, if the number of available processors is sufficiently large, the order of the computation time becomes very small, of order O(log^2 n) for matrices over a finite field. Fast matrix multiplication has numerous applications over fields: for example, the inversion of a matrix can be done with the same exponent. However, contrary to fast matrix multiplication itself, these algorithms are not well parallelized: the matrix inversion algorithm just alluded to, which we present in a later section, never sees its computation time go below O(n log^2 n)-type bounds. It is on the basis of sometimes older methods that one exhibits well-parallelized algorithms relying on fast matrix multiplication; these are, moreover, algorithms without divisions, or almost, so they apply over arbitrary commutative rings. This is the case, in particular, for the method of the astronomer Le Verrier, improved later by Souriau, Frame and Faddeev, which is used for the computation of the characteristic polynomial and for the inversion of matrices. This method carries Csanky's well-parallelized algorithm, which constructs, for the case of a commutative ring containing the field of rationals, a family of circuits computing the coefficients of the characteristic polynomial. Another, older method, that of partitioning (Samuelson, cf. [Gas]), regained interest with Berkowitz's algorithm, which furnishes a fast parallel computation of the characteristic polynomial without division; this algorithm permitted the generalization to arbitrary commutative rings of Csanky's results, by an entirely different route. We give an improved version in a later section: a simpler version of Berkowitz's algorithm which uses no matrix products but only products of a matrix by a vector; it is entirely efficient on ordinary computers, particularly in the case of sparse matrices.

We present in this book the principal characteristic-polynomial algorithms and, in addition, give applications to computations with polynomial matrices. Since a matrix is determined, via its characteristic polynomial machinery, by such coefficient computations, it suffices, having the characteristic polynomial of a matrix, to compute its adjoint; in the case of fields this allows one to compute its inverse, and over the reals, the information this gives about a quadratic form includes, for example, its signature.

Plan of the book. We make some reminders of linear algebra in Chapter 1. Chapter 2 contains the classical algorithms currently used for the computation of the characteristic polynomial: the Hessenberg algorithm, Lagrange interpolation, Le Verrier's algorithm and its improvements, the Samuelson-Berkowitz method, the more efficient method of Chistov which has similar performance, and finally the methods related to linearly recurrent sequences, the most efficient ones over finite fields. Chapter 3 develops the formalism of arithmetic circuits and straight-line programs for a formal description of computations; we explain there the important technique of the elimination of divisions, which is also due to Strassen.
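Strassen's scheme mentioned above is short enough to state in full. A sketch follows, where the seven products operate on matrix blocks, so the same function realizes the recursive block algorithm; the quick numerical check at the end is an illustration, not part of the book's text.

```python
import numpy as np

def strassen_2x2(A, B):
    """Strassen's scheme: a 2x2 block product with 7 multiplications
    instead of 8; the entries may themselves be matrix blocks."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) @ (b11 + b22)
    m2 = (a21 + a22) @ b11
    m3 = a11 @ (b12 - b22)
    m4 = a22 @ (b21 - b11)
    m5 = (a11 + a12) @ b22
    m6 = (a21 - a11) @ (b11 + b12)
    m7 = (a12 - a22) @ (b21 + b22)
    return ((m1 + m4 - m5 + m7, m3 + m5),
            (m2 + m4, m1 - m2 + m3 + m6))

# Quick check on a 4x4 product split into 2x2 blocks.
rng = np.random.default_rng(0)
A = rng.integers(-5, 5, (4, 4)); B = rng.integers(-5, 5, (4, 4))
blocks = lambda M: ((M[:2, :2], M[:2, 2:]), (M[2:, :2], M[2:, 2:]))
(C11, C12), (C21, C22) = strassen_2x2(blocks(A), blocks(B))
assert np.array_equal(np.block([[C11, C12], [C21, C22]]), A @ B)
```

Applied recursively, the seven multiplications per halving give the exponent log2(7), which is approximately 2.807, hence below 3.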
In the next chapter we give the principal notions of complexity most currently used; these notions constitute an attempt to classify computations on a machine by the time and space they occupy. In the following chapter we explain the divide-and-conquer method, well suited to parallel computation, and give several basic examples. A further chapter is devoted to fast multiplication of polynomials, with Karatsuba's method and the fast Fourier transform; the next to fast matrix multiplication, where we treat in particular the fundamental notions of tensor rank and approximate (border-rank) computations. A subsequent chapter is devoted to algorithms in which fast matrix multiplication intervenes without the whole algorithm being well parallelized: one obtains in this way the fastest known methods as far as asymptotic sequential time is concerned for most of the classical problems of linear algebra; these performances are obtained only over fields, and a single section of the chapter, on Kaltofen-style elimination, concerns computation over an arbitrary commutative ring. Another chapter treats the Le Verrier-type methods, which apply in any commutative ring in which the integers are non-zero-divisors and where division by an integer, when it is possible, is explicit. The following chapter is devoted to the Chistov and Berkowitz methods, which apply over an arbitrary commutative ring. A further chapter gives, first of all, several tables of the complexities of the characteristic-polynomial algorithms presented, and then the results of experimental tests concerning several computer algebra systems; these results show better performance for the Chistov and Berkowitz algorithms, with the advantage to the latter. The last two chapters are devoted to the work of Valiant on an algebraic analogue of the P versus NP conjecture, in which the permanent occupies a central place; although little progress has been made on Valiant's conjecture, it seems at any rate less out of reach than the algorithmic conjecture that inspired it. The appendix contains the Maple codes of the algorithms presented; a list is given in a table. We chose the computer algebra system Maple essentially for reasons of the programming language attached to it, which is close to that of numerous other classical languages and permits readable and efficient code; the other computer algebra languages would have done just as well, and the reader will have no difficulty transcribing into those languages the algorithms given in this book.

The spirit in which this work is written. We have given elementary proofs of our results, granting a large place to examples; it seemed to us better to give a proof only on an example than to refer to a distant reference, and we consciously assume that we have sometimes traded formal rigor for what passes for clarity. We would have liked to give more drawings and figures to illustrate our text, being well aware of having done far too few. We have also tried to bring this theoretical subject closer to the practice of algorithms: each time we could, we made the constants in the big-O explicit, without knowledge of which asymptotic comparisons have no practical significance and can be misleading. The level required to read this book is only a good familiarity with linear algebra; even better would be to have read beforehand that rare pearl, the book of Gantmacher [Gan]; one can also recommend the great classic, still available, of Lancaster and Tismenetsky. It is natural, but not indispensable, to have some acquaintance with the basic concepts of binary complexity, for which we recommend the works [BDG, Ste]. Finally, on algorithms in general, if you do not have Knuth's book [Knu], it is because you understand English badly or prefer the language of Voltaire; before beginning the reading of our work, write a letter to all the scientific publishers asking by what aberration the translation has not yet been made. To go further in computer algebra, we recommend the books of von zur Gathen and Gerhard, of Bini and Pan, of Bürgisser, Clausen and Shokrollahi [BCS, Bur], and the Handbook of Computer Algebra [GKW]. We hope that our book will contribute to a better grasp of the importance, at the present moment, of constructive methods and algorithmic solutions, which are beginning to occupy more and more an essential place in the teaching of mathematics, computer science, and the sciences in general.

Acknowledgments. We thank Marie-Françoise Roy and
Gilles Villard for their attentive rereading and their pertinent suggestions, as well as Peter for his help concerning the last two chapters, and finally the one who shared with us, with infinite patience, his expertise in LaTeX.

Contents (abridged). 1. Reminders: notations; the Binet-Cauchy formula; rank; the Cramer and Sylvester identities; the adjoint matrix; Samuelson's formula; characteristic polynomial, eigenvalues, minimal polynomial, Krylov subspaces; matrices with coefficients in a ring; linearly recurrent sequences, Hankel matrices, Newton's relations; Hadamard; modular computation, matrix norms, Chinese remaindering and its applications; the inverse over an arbitrary field. 2. Basic algorithms: Gaussian pivoting, elementary transformations, search for a nonzero pivot; Dodgson's formula; Hessenberg; Lagrange interpolation; Le Verrier and variants (the Preparata-Sarwate principle); the Samuelson-Berkowitz principle and an improved version; Chistov's principle and version; methods related to recurrent sequences: the Frobenius algorithm and Wiedemann's algorithm. 3. Circuits: arithmetic circuits and straight-line programs; a circuit as a graph; evaluation of circuits; elimination of divisions (Strassen's principle); computation of partial derivatives. 4. Notions of complexity: Turing machines, random-access machines, binary complexity, the classes of feasible computations, problems whose solutions are easy to test, counting classes, uniform families of circuits, Brent's principle. 5. Divide and conquer: principle, binary-circuit examples, computation of prefix sums. 6. Fast multiplication of polynomials: Karatsuba; the usual discrete Fourier transform; the fast Fourier transform; the favorable case; the case of an arbitrary commutative ring; products of Toeplitz matrices. 7. Fast matrix multiplication: analysis of Strassen's method; a uniform family of circuits; inversion of triangular matrices; tensor rank and applications; the exponent of matrix multiplication; bilinear complexity; extension of the base field; approximate computations and Bini's border rank; direct sums; asymptotic applications. 8. Fast sequential algorithms: the Bunch-Hopcroft algorithm; computation of the inverse; reduced row-echelon form. 9. The Le Verrier method: Csanky; the Preparata-Sarwate improvement; Galil-Pan; the method over an arbitrary ring. 10. The Berkowitz and Chistov algorithms and applications of these algorithms. 11. Tables of complexities and experimental tests. 12-13. Valiant's theory: expressions and circuits, descriptions of expressions and of circuits, why most polynomials are hard, the universality of the permanent, Valiant's conjecture, families of expressions and circuits, expressions versus circuits, simulation of circuits by expressions, binary versus algebraic forms, the permanent and Valiant's conjecture. Appendix: Maple codes; tables; bibliography; index; list of algorithms and circuits; list of figures; index of notations; index of terms and programs.

Chapter 1: Reminders. Introduction. This chapter consists of reminders of linear algebra, insisting on a few results related to our theme. Our aim is twofold: on the one hand, to fix the notations and give the formulas which will justify the algorithms computing these objects; on the other hand, to give an idea of the applications these computations can have. The first section fixes the notations and recalls the Binet-Cauchy formula as well as the Cramer and Sylvester identities. The next section treats Samuelson's formula; then we treat the minimal polynomial and Krylov subspaces; a further section is devoted to linearly recurrent sequences, where we recall the facts related to Newton sums; a later section touches on modular computation; finally, the last section treats the inverse and some of its uses.

Some notations. Throughout this work, A is a commutative unitary ring and K is a commutative field. For two arbitrary nonnegative integers m and n, the set of matrices with m rows and n columns with coefficients in A is denoted A^{m x n}. If the ring is not unitary, one can always embed it in a unitary one (cf. [Jac]).
Let A = (a_ij) be a matrix of order n with coefficients in A and let r be an integer; we adopt the following notations. For the determinant of a matrix of order n, det(A) is defined by the same formula as in the case of a matrix with coefficients in a field: det(A) is the sum, over the permutations sigma of the set of indices, of the signature of sigma times the product a_{1 sigma(1)} ... a_{n sigma(n)}. When no confusion can arise, we will sometimes write |A| instead of det(A). The trace of A is the sum of its diagonal entries. The comatrix of A is the matrix (d_ij) where each d_ij is the cofactor of position (i, j): d_ij = (-1)^{i+j} det(B_ij), where B_ij is the matrix obtained from A by suppressing row i and column j. One then has the Laplace expansion formulas along a row and along a column, valid over an arbitrary commutative ring: det(A) = sum_k a_ik d_ik = sum_k a_kj d_kj. The adjoint matrix Adj(A) is the transpose of the comatrix; recall its double role: A Adj(A) = Adj(A) A = det(A) I_n.

Now let A = (a_ij) be an arbitrary m x n matrix with coefficients in A and r an integer with 1 <= r <= min(m, n). A minor of order r of A is the determinant of an r x r matrix extracted from A by suppressing m - r rows and n - r columns; for the matrix extracted on the rows alpha and the columns beta one notes det(A_{alpha, beta}) the corresponding minor, the indices being taken in increasing order. The principal submatrices are those whose diagonal is extracted from the principal diagonal; their determinants are called the principal minors. We will say that a submatrix is a dominant principal submatrix if it is A_{1..r, 1..r}; its determinant is a dominant principal minor. For bordered minors one poses, for all integers i, j > r, a^{(r)}_{ij} = the minor of order r + 1 obtained by bordering the dominant principal submatrix of order r with the corresponding coefficients of row i and column j; by convention a^{(0)}_{ij} = a_ij.

Among the determinant identities that remain valid over an arbitrary commutative ring, one must cite first of all the multilinearity with respect to each row and each column; most important is the fact that the determinant vanishes when two rows or two columns are equal. These imply that a determinant does not change when one adds to a row (resp. a column) a linear combination of the other rows (resp. columns). In addition one may cite all the identities of the style det(AB) = det(A) det(B), det(A^T) = det(A), and the expansion of the determinant of a matrix of order n as a multilinear expression. Since these identities are polynomial identities in the entries with integer coefficients, they can be verified over the reals: a polynomial in integer variables is identically zero if and only if it vanishes on all real points; they are therefore true in all commutative rings.

The Binet-Cauchy formula. This is a formula that generalizes det(AB) = det(A) det(B) to the case of a product A B with A of type n x m and B of type m x n, n <= m. For each beta extracted in increasing order from {1, ..., m}, let A_{1..n, beta} be the matrix extracted from A on the columns beta and B_{beta, 1..n} the matrix extracted from B on the rows beta; then the formula reads (cf. [Gan]):
det(AB) = sum over beta of det(A_{1..n, beta}) det(B_{beta, 1..n}),
the sum comprising C(m, n) terms. (Proof sketch: writing C = AB with c_ij = sum_k a_ik b_kj and using the multilinearity of the determinant in the columns, det(C) expands as a sum over all maps from {1..n} to {1..m}; among the terms of this sum, the only ones that can be nonzero are those indexed by injective maps, the others having two identical columns; regrouping the terms into partial sums, each partial sum corresponding to a subset beta comprises n! terms in which one can factor out det(A_{1..n, beta}); it then suffices to use the alternating property to see that the remaining partial sum is nothing other than det(B_{beta, 1..n}).)

A square matrix is said to be regular if it is invertible, i.e., if its determinant is invertible in A, and singular if its determinant is zero. A matrix is said to be strongly regular if all its dominant principal submatrices are regular, i.e., if all the dominant principal minors are invertible in the base ring. When A is an integral domain, the rank of an arbitrary matrix with coefficients in A is the maximal order of a nonzero minor. Using the above notations, we now recall several classical results, some of which will be used later. As we will often work with an arbitrary commutative ring, we will need the notion of a module, which is to rings what the notion of vector space is to fields: an A-module is given by an abelian group (M, +, 0) equipped with an external law A x M -> M satisfying the usual axioms for all a, b in A and x, y in M.
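The Binet-Cauchy formula is easy to check symbolically. A sketch with sympy for a 2 x 3 times 3 x 2 product of generic matrices:

```python
from itertools import combinations
from sympy import Matrix, symbols

# Cauchy-Binet: for A (n x m) and B (m x n) with n <= m,
# det(AB) = sum over n-subsets beta of det(A[:, beta]) * det(B[beta, :]).
a = symbols('a0:6'); b = symbols('b0:6')
A = Matrix(2, 3, a)      # generic 2 x 3
B = Matrix(3, 2, b)      # generic 3 x 2
lhs = (A * B).det()
rhs = sum(A[:, list(beta)].det() * B[list(beta), :].det()
          for beta in combinations(range(3), 2))
assert (lhs - rhs).expand() == 0   # the identity holds identically
```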
In this work we will use only free modules of finite dimension, isomorphic to some A^n. Sometimes, when A is an integral domain, one can embed it in its field of fractions K; every free A-module of finite dimension, isomorphic to A^n, can then be seen as included in a K-vector space isomorphic to K^n, and the rank of a matrix over A is then its usual rank when its coefficients are viewed in the field K. In the sequel, each time a hypothesis on A must intervene, it will be clearly indicated.

Lemma. Over a field, for all matrices of order n: the adjoint of a matrix of rank at most n - 2 is zero, and the adjoint of a matrix of rank n - 1 has rank one. (The proof comes from the fact that, for two endomorphisms of a finite-dimensional vector space, dim im and dim ker are complementary, together with the fact that A Adj(A) = det(A) I_n and that all the minors of order n - 1 of a rank-deficient matrix may vanish.) Over an arbitrary commutative ring, identities such as Adj(AB) = Adj(B) Adj(A) and det(Adj(A)) = det(A)^{n-1} hold as well. (Proof sketch: one first supposes the ring is the integral ring generated by indeterminate entries of generic matrices; in this ring det is a nonzero element, the identities can be obtained by division, and since det(A) Adj(AB) = det(A) Adj(B) Adj(A) can be established and division by det is exact in this ring, the identities hold there; being polynomial identities with integer coefficients, they then hold in every commutative ring, which permits one to conclude in all cases.)

Proposition (Cramer identities). Let A be an arbitrary commutative ring. Denote by A_j the j-th column of A and by A^{(j)}(b) the matrix obtained from A by replacing column j by the vector b. If A is square of order n, one has the identity Adj(A) b = (det A^{(1)}(b), ..., det A^{(n)}(b)); consequently, when det(A) is invertible, the unique solution of A x = b is given by x_j = det(A^{(j)}(b)) / det(A), which can be reread in the more classical form x = Adj(A) b / det(A). More generally, suppose all the minors of order r + 1 of A vanish; choosing rows alpha and columns beta in increasing order such that the extracted matrix of order r is regular, the bordered-determinant expressions built on it furnish a solution of the corresponding system, as one sees by expanding the determinant of the bordered matrix along its last row.

Proposition. Let A be an arbitrary nontrivial commutative ring and A an n x m matrix. The following are equivalent: (i) for every v there exists x such that A x = v, in other words the map given by A is surjective; (ii) there exists a matrix B such that A B = I_n; (iii) there exists a linear combination of the minors of order n of A which equals 1. (Proof sketch: for (i) implies (ii), take for the columns of B preimages of the basis vectors; (ii) implies (iii) by applying the Binet-Cauchy formula to det(AB) = 1; (iii) implies (i) by the Cramer identities, setting for each beta the vector Adj(A_{1..n, beta}) v and summing with the given combination.)

The Sylvester identities. Lemma: let A be an arbitrary commutative ring; for every integer n and every matrix A = (a_ij) of order n one has det(A) det(A_{1..r, 1..r})^{n-r-1} = det((a^{(r)}_{ij})_{r < i, j <= n}), the determinant of the matrix of bordered minors. (To obtain this formula it suffices to expand the left-hand determinant along a row, then each of the cofactors intervening in it along a column.) Proposition (Sylvester, block form): with the notations above, for all integers r < n, partitioning A into blocks along the first r rows and columns, the same identity can be written by means of Adj(A_{1..r,1..r}) and the complementary blocks; this proposition is proved first for generic matrices, where the entries are indeterminates and the relevant minors are nonzero, then transferred to every commutative ring since both members are polynomial identities. The Sylvester identities will be used later for the computation of the characteristic polynomial. Remark: the case r = n - 2 gives exactly Dodgson's condensation formula.

The characteristic polynomial. To a matrix A of order n one associates the characteristic matrix x I_n - A, and the characteristic polynomial of A is P_A(x) = det(x I_n - A). It would in fact be more practical to define it, as Bourbaki does, as det(A - x I_n), but we hold to the more common usage. Note that P_A is monic of degree n and that, for each k, the coefficient of x^{n-k} is (-1)^k times the sum of all the principal minors of order k of A; in particular the coefficient of x^{n-1} is minus the trace, -sum_i a_ii.
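The double identity A Adj(A) = Adj(A) A = det(A) I_n and the Cramer formula can be verified on a generic matrix. A sympy sketch:

```python
from sympy import Matrix, eye, symbols

x = symbols('x0:9')
A = Matrix(3, 3, x)                          # generic 3 x 3 matrix
# Double role of the adjoint: A * Adj(A) = det(A) * I.
assert (A * A.adjugate()).expand() == (A.det() * eye(3)).expand()

# Cramer: Adj(A) * b is det(A) times the solution of A y = b.
b = Matrix(symbols('b0:3'))
y_scaled = A.adjugate() * b
assert (A * y_scaled).expand() == (A.det() * b).expand()
```

Because these are polynomial identities in the entries, verifying them on generic symbolic matrices establishes them over every commutative ring, exactly the transfer principle invoked in the text.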
nous nous tenons usage plus notons que det que pour coefficient est produit par somme tous les mineurs diagonaux ordre particulier aii matrice adjointe appelle matrice adjointe matrice adj xin adj xin est signe adjointe matrice elle peut vue comme matriciel coefficients dans effet matrice des cofacteurs xin est une matrice dont les diagonaux sont des les autres des qui fait que matrice adjointe page xin posera ainsi est divisible par xin sens division euclidienne dans anneau pour obtenir les coefficients matriciels quotient dans cette division applique horner matriciel pour convaincre peut examiner formule qui donne par det xin voir quels sont les produits qui contiennent peut aussi faire une preuve par sur det xin suivant colonne encore horner est rien autre une misei forme algorithmique division euclidienne par reste est alors obtenu sous forme ceci consitue une efficace utilisant minimum multiplications cette est fait identique celle chine elle par ruffini horner voir encyclopedia mathematics chez kluwer rappels avec constante reste est nul qui donne fournit passant une horner peut par suivant avec cela donne une rapide efficace pour calculer partir des coefficients matrice adjointe fournit les relations suivantes pour formule samuelson page notons fin horner obtient encore det det est inversible dans alors inverse qui peut par formule notons que adj donc adj ainsi calcul matrice adjointe nous donne adjointe nous permet aussi obtenir inverse existe notamment dans les cas pivot gauss impraticable qui produit lorsque anneau contient des diviseurs matrice adjointe sert comme nous verrons plus loin voir page suivantes calculer par dans certains cas des vecteurs propres non nuls nous allons utiliser pour important pour suite faisant objet paragraphe suivant formule samuelson utilisant les relations dans lesquelles remplace les coefficients par les coefficients sachant que obtient adj cette nous sert formule samuelson voir gas partitionnement proposition formule samuelson soit anneau commutatif arbitraire aij prn entier notons det xir principale dominante posons sorte que matrice est comme suit ann alors encore preuve tout abord applique sylvester matrice xin obtient det xir adj xir ensuite applique par par adj xir formule samuelson sera dans algorithme berkowitz rappels valeurs propres supposons anneau soit son corps des fractions une extension dans laquelle produit facteurs premier une telle extension est appelle corps tout dans est valeur propre est par valeur propre plus est pas est parfois utile envoyer dans corps par homomorphisme anneaux unitaires les valeurs propres matrice dans seront alors par les def soit matrice les lem suivant exprime alors particulier lien entre les valeurs propres matrice celles matrice dans cas anneau base est lem soit anneau une extension corps des fractions avec les alors particulier pour tout preuve suffit montrer premier point dans corps qui contient toutes les valeurs propres matrice peut une forme triangulaire existe une matrice triangulaire avec les sur diagonale une matrice inversible telles que comme est que est triangulaire forme matrice sera aussi triangulaire forme plus puisque est automorphisme par suite anneau qnl fait est pas que les deux matrices soient semblables pour conclure que suffit pour cela que comme indique suivant valable dans anneau commutatif arbitraire soient deux matrices coefficients dans ayant alors pour tout soit matrice compagnon unitaire matrice sans que suffit montrer que pour conclure puisqu alors aussi peut est une anneau 
muni une loi externe produit une matrice par scalaire qui les est automorphisme cette structure homomorphisme bijectif anneau qui plus que pour tout rappels agit alors montrer que les coefficients les comme sur deg revient dans ces sont ensuite valables dans tout anneau commutatif les variables formelles par des elles donnent pour ces suffit les sur ouvert lorsqu substitue arbitraire pour cela par exemple ouvert correspondant des matrices suffisamment proches matrice diagonale suivante ces matrices sont diagonalisables puisque leurs valeurs propres restent distinctes dans ouvert dans sont diagonalisables avec les valeurs propres est trivial minimal soit corps espace vectoriel dimension par une matrice dans une base minimal krylov minimal est par unitaire engendrant des tels que priori peut par les usuelles cherchant relation entre les vecteurs successifs ide espace vectoriel end des endomorphismes minimal puisque obtient que divise par ailleurs vecteur par dit aussi krylov pour couple est par par vecteurs note dimension est autre que unitaire qui engendre appelle les deux appartiennent cet ainsi existence vecteur qui suffit pour que endomorphisme soit signe son minimal mais est aussi vraie comme nous allons voir remarquons que comme peut par les usuelles rappelons maintenant classique suivante relative sous espaces soit endomorphisme tel que avec pgcd pour posons ker chaque est note restriction alors restriction induit automorphisme espace vectoriel les preuve tout abord remarque que deux endomorphismes forme commutent toujours puisque ensuite est clair que tout type ker est rappels preuve des points est alors sur les sur qui implique ide qui lit ide choisit dans chaque une base leur est une base matrice sur est diagonale parqblocs chaque bloc matrice sur ceci implique soit est clair que car nul sur chaque que les sont premiers entre eux car divise fortiori donc est multiple des par suite multiple ceci termine preuve point preuve point est analogue par soit prmr minimal facteurs premiers distincts reprend les notations avec pimi pose outre ker ker pimi pour alors chaque est strictement inclus dans pour tout les suivantes preuve fait que des est claire condition signifie exactement que divise pas strictement minimal chose pour tout diviseur strict pimi divise pimi cette remarque montre aussi que inclusion est stricte corollaire existe toujours vecteur tel que particulier minimal pour dimension autrement dit signe est existe des vecteurs qui preuve suffit prendre avec chaque dans minimal ainsi sauf exception choisit dehors petit nombre vectoriels stricts les ker cette remplit jamais espace corps base est fini vecteur convient notons que preuve est peu satisfaisante parce sait pas calculer facteurs premiers ensuit que notre preuve existence vecteur qui reste plus que pratique voici une contourner cet obstacle lem suivant lem qui est facteur strict peut bien produit deux bien sous forme que nous donnons pas consiste partir factorisation intitiale raffiner maximum utilisant les pgcd ensuite rappelle que les peuvent facilement par les classiques alors avec non nul arbitraire est sinon applique lem avec dans premier cas applique avec les est dans deux espaces dimensions plus petites dans cas choisit nouveau dans ker qui fait que son minimal augmente strictement remarque toute donne une forme dans une base convenable par une matrice diagonale par blocs forme sont les matrices des endomorphismes induits par dans les certaines ces formes sont rappels dites canoniques comme jordan dans cas factorise 
facteurs sur existe autres formes canoniques rationnelles qui utilisent pour changement base que des expressions rationnelles les coefficients matrice sur sujet des formes normales sur bien autres nous recommandons livre gantmacher gan dont attend toujours prix abordable cas matrices coefficients dans dans certains algorithmes que nous aurons par suite nous partirons une matrice coefficients dans anneau bien souvent sera avantageux aucun des calculs produise des qui seraient dans corps des fractions sans dans pose alors naturellement question suivante les que nous pouvons envisager comme calculs vue trouver toujours des coefficients dans dans cas matrices coefficients dans dans anneau est positive est pas est sur les les qui suivent pour lesquelles peut consulter les livres classiques par exemple gob mrr anneau est dit clos tout diviseur unitaire dans unitaire est dans avec tels anneaux les sont donc automatiquement coefficients dans anneau est dit anneau pgcds tout couple admet pgcd tel que divise divise pour anneau soit clos suffit que qui soit pour les diviseurs suites autrement dit tout dans unitaire est dans tout anneau pgcds est clos est anneau pgcds pour suites soit espace vectoriel resp module une suite entier une relation ordre pour cette suite est par resp dans resp dans est suite lorsque coefficient est inversible suite est alors par car elle peut ensuite construite par elle est quelque sorte par qui justifie terminologie une suite dans est une suite qui dont coefficient dominant est inversible cette situation suivante appelle espace vectoriel resp module toutes les suites valeurs dans note qui donne pour image suite cran est clair que est dire que suite relation traduit exactement langage peu plus abstrait par ker cela montre que les une suite forment resp dans cas corps une suite comme est anneau principal cet non nul est par unitaire rappels unique appelle minimal simplement minimal suite nous noterons nous allons voir plus loin que peut effectivement suite maintenant unitaire dans resp dans espace vectoriel resp module ker des suites dans pour lesquelles est sera est isomorphe resp une base canonique est fournie par les suites telles que pour est symbole kronecker pour une suite arbitraire dans alors est clair que est stable par notons restriction constate que matrice sur base canonique est matrice compagnon particulier outre par simple application des obtient comme exemples importants suites peut citer suite des puissances une matrice dont minimal est autre que minimal matrice pour des vecteurs les suites dont les minimaux respectivement divise sont alors tels que divise chacune ces suites est par ses premiers termes suites suites matrices hankel comme montre discussion qui suit des suites est celle des matrices hankel une matrice hankel est une matrice pas vij dont les coefficients sont constants sur les diagonales montantes vij vhk les matrices hankel fournissent exemple matrices autre exemple plus important est celui des matrices toeplitz celles dont les coefficients sont constants sur les diagonales descendantes vij vhk remarquons une matrice hankel ordre est une matrice que les produits une matrice hankel ordre par matrice hankel sont des matrices toeplitz cette matrice permutation ordre permet renverser ordre des colonnes resp des lignes une matrice lorsque est droite resp gauche par matrice est pourquoi appelle matrice renversement encore matrice arabisation fait elle permet droite gauche les colonnes que lit gauche droite inversement inversement les produits une 
matrice toeplitz ordre par matrice sont des matrices hankel une matrice est par beaucoup moins coefficients une matrice ordinaire taille par exemple une matrice hankel resp toeplitz type est par coefficients ceux des ligne resp colonne cela rend ces matrices importantes pour les grands calculs est une suite arbitraire nous noterons rappels matrice hankel suivante qui lignes colonnes fait suivant est une simple constatation fait reprenant les notations une suite est une suite avec comme seulement sont pour tous les matricielles qui revient transposant naturellement suffit que ces soient lorsque donc aussi fait sous les dans toute matrice les colonnes sont combinaisons des par transposition dans toute matrice les lignes sont combinaisons des proposition suivante proposition avec les notations dans cas corps est une suite qui admet pour son minimal est rang matrice hankel les coefficients sont unique solution cpa encore unique solution relations newton preuve relation entre les colonnes appelons unitaire correspondant qui cela donne sur par donc fortiori pour tout cela signifie que est pour suite laisse soin lectrice lecteur finir preuve relations newton soit anneau commutatif unitaire sur des tout unique comme une somme finie distincts xjnn xjnn somme porte sur une partie finie souvent donner bon ordre sur les termes xjnn pour faire des preuves par induction suffit par exemple ordonner les pour bon ordre sur les termes par exemple ordre lexicographique ordre lexicographique total souvent aussi comme somme ses composantes rappels est total composante une simple voir les composantes est une nouvelle composante est autre que coefficient dans par groupe des permutations est dit son stabilisateur par action sur est groupe tout entier encore les une orbite figurent dans expression avec coefficient des importants sont les sommes newton xki notera sym ensemble des sur est une propre est bien connu peut faire par utilisant ordre lexicographique que tout exprime unique comme les sont les cela signifie que homomorphisme sym par est isomorphisme outre lorsque est corps des fractions cet isomorphisme prolonge unique isomorphisme vers sym qui est par des fractions rationnelles invariantes par permutation des variables autrement dit relations newton toute fraction rationnelle sur unique comme une fraction rationnelle sur exprime fait disant que est fondamental sur corps plus anneau commutatif unitaire resp corps sur cet anneau resp corps appelle fondamental sur anneau resp fractions rationnelles sur corps tout sym resp sym sym resp sym attention langage fondamental sur corps est pas fondamental sur anneau est par contre fondamental sur anneau est toujours fondamental sur corps fondamental sur corps implique garantit expression rationnelle dans fondamental toute fraction rationnelle sur les relations dites newton permettent exprimer les sommes newton dans fondamental des proposition relations newton les newton sont aux par les relations suivantes pour skp iii pour preuve pose les apparaissent dans rappels par logarithmique formelle obtient les suivantes dans formelles encore dans avec identifiant dans les termes pour obtient les formules les formules iii sont obtenues par identification des termes remarque notant les relations newton sous forme matricielle suivante les relations qui correspondent aux lignes dans matrice infinie iii qui correspondent aux lignes suivantes donnent formule fait autre part les relations iii peuvent obtenues directement multipliant par obtient relations newton lorsque les donnent 
alors par sommaj tion xkj hxn corollaire est anneau commutatif les entiers sont inversibles alors les sommes newton les forment fondamental sur anneau preuve triangulaire par les dans admet clairement une solution unique les corollaire soit unitaire une sur anneau soient les racines distinctes non dans corps une extension pose pour les relations ainsi les coefficients sont unique dans par ses sommes newton est non diviseur dans division exacte par chacun des entiers est explicite lorsqu elle est possible calcul des partir des est lui aussi explicite dans suivante les sommes newton pour les cas anneau commutatif arbitraire soit unitaire une sur anneau commutatif alors les par les appellent les sommes newton tout avec non diviseur dans fait important est suivant rappels lem soit anneau commutatif arbitraire une matrice ordre sur son les sommes newton dans sont les traces des puissances matrice pour preuve remarque que les sont des les matrice comme des suffit donc traiter cas anneau dans cas est par corollaire lem hadamard calcul modulaire nous abord quelques majorations utiles normes matricielles aij est une matrice coefficients complexes classiquement les normes suivantes cia hxm hxn max max chacune ces normes les relations classiques kak ont dimensions kak produit est maintenant des matrices coefficients entiers taille entier est espace occupe lorsqu implante sur machine codage des entiers est standard cela veut dire que taille est correctement par notation dans tout cet ouvrage log max hadamard calcul modulaire lorsque est entier cela donc taille une constante log avec une des normes taille chaque coefficient est clairement par une constante additive outre les relations impliquent que max ces relations sont souvent utiles pour calculer des majorations taille des entiers qui interviennent comme calculs matriciels hadamard hadamard applique aux matrices coefficients valeur absolue volume dimensionnel construit sur les vecteurs colonnes matrice donc elle est par produit des longueurs ces vecteurs det aij des preuves rigoureuses fait intuitif par exemple processus orthogonalisation remplace matrice par une matrice dont les sont deux deux orthogonaux sont pas plus longs que ceux matrice initiale signification cette preuve est suivante processus orthogonalisation remplace construit sur les vecteurs colonnes par droit volume dont les sont devenus plus courts raisonnement donne dans cas une matrice coefficients complexes par mais directe avec les normes obtient pour une matrice det det avec norme frobenius obtient majoration suivante produit positifs dont somme est constante est maximum lorsqu ils sont tous det rappels chinois son application aux calculs modulaires soient des entiers positifs deux deux premiers entre eux pose pour toute suite entiers relatifs existe entier unique modulo mod peut calculer cet entier modulo remarquant que pour tout compris entre les nombres sont premiers entre eux que par existe des entiers relation tels que nombre est autre que modulo est facile bien question une des importantes chinois calcul formel est son utilisation pour calcul coefficients entiers dont sait majorer valeur absolue par entier strictement positif arrive souvent que les calculs lorsqu ils sont avec sans division dans donnent des coefficients dont taille explose rapidement qui risque rendre ces calculs impraticables trop alors que taille final est bien plus petite supposons que ait calculer tel que par algorithme sans divisions commence par choisir des entiers positifs deux deux premiers entre 
eux dont produit strictement lieu calculer directement effectue tous les calculs modulo pour chaque les ainsi obtenus sont tels que est dans classe modulo pour utilisant les notations que pour les coefficients relatifs aux couples ensuite principal partir des partiels remarquant que est entier relatif plus petite valeur absolue congru modulo puisque dans cas algorithme avec divisions les facteurs doivent choisis ils soient premiers avec les diviseurs intervenant dans les calculs pour calcul des matrices coefficients entiers par exemple peut utiliser hadamard pour faire fonctionner modulaire prendra pour borne cela peut choisir une des bornes dans les est pour calcul hadamard calcul modulaire une matrice car chacun des coefficients est une somme mineurs diagonaux matrice peut donc aussi utiliser hadamard pour majorer les valeurs absolues des coefficients vue traitement modulaire plus prend comme cidessus par cnk les mineurs diagonaux ordre pour entre alors coefficient terme est valeur absolue comme suit cnk puisque cnk quelques pratiques principale dans utilisation calcul modulaire est remplacer algorithme dans permettant par plusieurs algorithmes modulo des nombres premiers pour vraiment efficace cette doit avec des listes nombres premiers pour lesquelles les coefficients correspondants qui permettent partir des ces nombres premiers peuvent choisis par rapport taille des mots par les processeurs par exemple pour des processeurs qui traitent des mots bits prend des nombres premiers compris entre suffisamment bien plus que nombres pour dans pratique tous les taille humainement raisonnable outre des tests rapides pour savoir nombre est premier cela permis des listes avec liste des coefficients correspondants qui tous les cas qui posent pratique chaque modulo tel nombre premier fait alors temps constant qui temps calcul outre algorithmes modulaires offre utiliser plusieurs processeurs rappels uniforme des nous expliquons ici comment permet rang une matrice avec une seule formule type cramer les ayant format rang ceci sur corps arbitraire cette solution uniforme constitue une extension mulmuley qui traite que question rang naturellement rang une matrice peut par pivot gauss mais est pas uniforme priori laisse pas bien les applications des formules algorithmes que nous allons ici seront deux ordres une part calcul autre part lorsqu doit traiter des dans cas figure pivot gauss produit arbre calcul qui risque comporter grand nombre branches correspondant grand nombre formules distinctes lorsque les prennent toutes les valeurs possibles cas est celui toutes les une matrice sont des par exemple avec une matrice rang maximum format solution correspondant par pivot gauss mineur maximal non nul extrait mineurs ordre dernier peut importe lequel des matrice analyse matricielle avec des matrices coefficients complexes une formule uniforme compacte rang est obtenue par utilisation des coefficients gram matrice correspondant dans cas gram ordre une matrice est somme des tous les mineurs ordre son annulation signifie que rang matrice pas les que nous allons obtenir sont des directes des formules usuelles qui expriment inverse fonction des coefficients gram matrice est que sur corps fini petit nombre sommes mineurs suffit rang une matrice que des formules semblables aux formules usuelles fonctionnent encore cependant prix payer non qui est introduire dans les calculs uniforme des les coefficients gram inverse dans cas complexe dans toute section est une matrice dans avec sur des bases une application entre 
espaces vectoriels hermitiens euclidiens dimension finie nous noterons produit scalaire des vecteurs nous notons dans cas matrice sur les bases application adjointe par yif les matrices sont des matrices hermitiennes positives positives dans cas non est vectoriel nous noterons projection orthogonale sur vue comme application dans point vue pure tous les qui suit sont sur des espaces sommes directes noyaux images lem nous avons deux sommes directes ker ker cela fait que ker resp ker est orthogonal resp qui est une directe nous les faits suivants fait application restreint isomorphisme sur restreint isomorphisme sur outre ker ker ker ker pas confondre avec matrice adjointe adj cette dans terminologie est ennuyeuse rappels soit automorphisme par est restriction nous avons det idim det idf nous avons automorphisme det idim det ide sont des directes lem cela sera plus clair nous dans les sommes orthogonales ker ker les coefficients gram sont les par formule det nous aussi pour notez que est det les coefficients gram sont donc signe les coefficients lem conditions gram pour rang application est rang seulement pour elle est rang outre coefficient gram est nombre positif nul somme des des modules des mineurs ordre matrice suffit pour certifier que rang est uniforme des preuve premier point est une directe fait pourrait aussi comme une second point que nous maintenant coefficient est somme des mineurs principaux ordre chaque mineur principal ordre est obtenu comme matrice correspondante qui est extrait est matrice extraite gardant seulement les lignes correspondant formule nous indique alors que est somme des des modules des mineurs ordre extraits nous supposons donc puisque det idim det idf nous donne idim par suite obtient par dans formule ainsi lem projections orthogonales sur image sur noyau projection orthogonale sur est projection orthogonale sur est projection orthogonale sur noyau est ide outre implique que inverse est par idim rappels puisque cela donne ide supposons que est rang inverse rang est application par remarque nous avons pas parce que membre est priori mal est une application est une application dans dans est une application dans voit que nous obtenons puisque appliquant obtient alors une formule uniforme rang qui donne une solution des analyse matricielle proposition inverse soit soit application par inverse est par ide idf nous avons seulement seulement dans cas est unique solution dans uniforme des remarque voici formulation matricielle lem proposition soient dans avec min une matrice rang posons soit matrice projection orthogonale sur est matrice projection orthogonale sur est celle projection orthogonale sur noyau est matrice inverse rang est admet une solution seulement est matrice obtenue juxtaposant colonne droite matrice seulement dans cas est unique solution dans espace notez que matrice est bien par formule que est rang cela est utile analyse plus chaque fois que les coefficients sont des connus avec seulement une finie qui peut introduire une incertitude sur rang matrice cas des matrices hermitiennes lorsque endomorphisme est dit hermitien alors une orthogonale ker restriction est automorphisme nous posons det ide det rappels signe les sont donc les coefficients rang est alors det ide det idim ainsi par idim par nous obtenons ceci donne pour cas des matrices hermitiennes une version des plus elle trouve dans ouvrage bini pan proposition inverse cas hermitien projection orthogonale sur est inverse est remarquez que peut sont les valeurs non nulles les racines des 
valeurs propres existe des bases par rapport auxquelles matrice est uniforme des cia matriciellement obtient sont des matrices unitaires orthogonales dans cas convenables ceci appelle valeurs svd anglais voit que transforme dans avec pour longueurs des axes principaux dans ces conditions matrice est celle est format que bien que les matrices soient unique continument sous que rang est pas pour les matrices valeurs qui sont fondamentalement instables que vecteur appartienne non toujours qui est orthogonal ker est projection orthogonale sur ainsi lorsque est pas dans image inverse fournit une solution qui donne pour meilleure approximation sens des moindres outre est plus petite norme parmi les solutions qui cette meilleure approximation qui est remarquable est arrive calculer essentiellement aide les projections orthogonales inverse par une formule uniforme plus exactement par une formule qui que rang lequel lit sur question sans ait besoin calculer les bases dans lesquelles application rappels sur corps arbitraire dans cas complexe termes tout paragraphe est par les sommes directes entre les noyaux images lem ker ker suffit effet lorsqu parle projection orthogonale remplacer par exemple expression projection orthogonale sur par projection sur ker nous allons voir maintenant que ces relations peuvent automatique sur corps arbitraire condition introduire place une matrice coefficients dans corps est une pour cela nous nous limitons point vue purement matriciel est point vue des bases ont dans nous une forme quadratique sur une forme quadratique sur nous notons les produits scalaires correspondants par nous notons les matrices diagonales ces formes sur les bases canoniques toute application donne lieu une application que nous notons encore qui est par matrice sur les bases canoniques existe alors une unique application qui les dans nouveau contexte yitf ite matrice sur les bases canoniques est alors puisqu doit avoir pour tous uniforme des pratique obtient par exemple comme nous avons pour pouvoir reproduire avec les variations fait les les lemmes proposition nous suffit analogue lem lem avec les notations pour toute matrice des sommes directes orthogonales dans les espaces ker ker preuve les dimensions conviennent suffit montrer que intersection est prenons par exemple relation implique que orthogonal sens forme est ker nous suffit donc montrer que est vectoriel sur son orthogonal dans sens produit scalaire coupe soit donc existe tels que quitte multiplier par produit des peut supposer que les sont des donc aussi les peut introduire une nouvelle travailler dans alors puisque est orthogonal tous les est orthogonal cela donne nous reste voir que cette relation implique que les sont tous nuls supposons des non nul soit plus grand des des soit plus grand indice pour lequel deg coefficient dominant alors facilement que coefficient dans est donc est non nul nous nous contentons maintenant reproduire les dans notre nouveau cadre rappels fait application restreint isomorphisme sur restreint isomorphisme sur outre ker ker ker ker soit automorphisme par est restriction nous avons det idim det idf nous avons automorphisme det idim det ide les coefficients matrice sont des laurent autrement dit des les gram sont les laurent les coefficients gram sont les coefficients par formule det nous aussi pour que dans suite cette section nous dirons pour place laurent laissant lecteur soin selon contexte des puissances variable sont non notons que les coefficients gram usuels sont par uniforme des les coefficients 
gram sont des sommes mineurs ils permettent rang matrice vertu lem suivant qui est analogue lem lem conditions gram pour rang soit matrice est rang seulement les pour sont identiquement nuls elle est rang outre est somme des coefficient gram des mineurs ordre matrice extraits sur les lignes les colonnes correspondant aux pour toutes les paires qui particulier inf posant inf sup nombre total des coefficients gram est nous avons les analogues lem proposition lem projections sur image sur noyau soient dans avec min une matrice rang posons matrice projection sur ker est matrice projection sur ker est matrice projection sur noyau est rappels notez agit projections orthogonales par rapport aux formes remarque fait chaque formule peut par importe quelle valeur qui annule pas qui est toujours possible corps moins supposons que est rang inverse rang est application par proposition inverse soient dans avec min rang inverse rang est matrice admet une solution seulement est identiquement nul seulement dans cas est unique solution dans remarque est injective est par vecteur colonne est unique solution correspondant pas les fractions rationnelles par calcul des simplifient des constantes cas des matrices dans paragraphe est par une matrice soit automorphisme defini par par rapport base canonique uniforme des celle matrice est est puisque est ker ker ker ker ker ker donc peut comme deux orthogonales par rapport forme ker ker ker ker nous notons automorphisme obtenu par restriction les applications ont rang somme directe ker que det det idi avec les sont signe les coefficients vient version lem ceci constitue mulmuley lem conditions mulmuley pour rang une matrice avec soit soit coefficient dans pae alors matrice est rang seulement les pour sont identiquement nuls elle est rang outre puisqu somme orthogonale ker peut reproduire les calculs dans cas des matrices obtient suivant qui simplifie ceux obtenus dans cas une matrice arbitraire similaire proposition proposition inverse une matrice soit rang application par matrice rappels les coefficients sont par dans suite entend par rapport forme projection orthogonale sur pour matrice inverse pour matrice endomorphisme dont matrice est est inverse sens suivant application est projection sur ker est projection sur ker pour tout vecteur colonne admet une solution seulement abv cas positive est unique solution dans espace preuve reste prouver point posons sait que puisque cela donne tout suite les deux pour tout reste suit sans pour des inverses nous recommandons les livres bha par les cramer supposons matrice rang dans espace par les colonnes appelons colonne soit det mineur ordre matrice extrait sur les lignes les colonnes pour soit matrice extraite ceci que colonne par colonne extraite sur les lignes uniforme des alors obtient pour chaque couple une cramer due fait que rang matrice est page ceci peut relire comme suit adj adj notons rappelons que nous multiplions chaque par nous additionnons toutes ces nous obtenons une expression forme adj cette formule ressemble beaucoup trop dans proposition pour pas due une ainsi inverse peut comme une somme cramer nous prouverons cependant pas cette peut trouver dans cadre plus comme dans avec formulation ici dans algorithmes base introduction agit dans chapitre analyser certaines plus moins classiques pour calcul coefficients dans anneau commutatif objectif est comparer ces algorithmes meilleur possible plus rapide pratiquement occupant moins espace possible notamment explosion taille des plus facilement sur machine plus 
applicable dans anneau commutatif arbitraire nous introduirons plus loin chapitre des notions dans chapitre nous nous contenterons notion informelle par compte nombre dans anneau base lors algorithme nous ferons quelques commentaires souvent informels sur bon non taille des nous par algorithme pivot gauss pour calcul est algorithme plus classique fonctionne sur corps nombreuses applications solutions calcul inverse pivot pour des est fait due aux savants chinois pourra consulter sujet notice historique chapitre dans ouvrage schrijver sch ainsi que plus karine chemla notre trouve dans les commentaires liu hui sur texte classique les neuf chapitres semble appeler une algorithmes base nous continuons avec algorithme qui pour grande algorithme qui peut comme une adaptation pivot gauss avec meilleur comportement des coefficients cet algorithme fonctionne sur anneau commutatif condition que les divisions exactes soit pas trop dans cas calcul devient algorithme sans division applique sur anneau commutatif arbitraire est que nous appelons une variante algorithme due dodgson alias lewis caroll offre des perspectives dans cas des matrices nous ensuite algorithme hessenberg couramment analyse utilise des divisions par des non nuls arbitraires suppose donc travaille sur corps mais calcul formel veut des exacts pose croissance taille des nous signalons interpolation lagrange dans laquelle calcul calcul plusieurs nous examinons ensuite des qui utilisent des divisions uniquement par des nombres entiers petite taille agit verrier son nous continuons avec les sans division chistov plus nettement plus efficaces que elles fonctionnent sur anneau commutatif arbitraire algorithme chistov les que celui mais handicap par rapport dernier dans les tests nous terminons avec les qui utilisent les suites celle que nous appelons frobenius bonnes tant point vue nombre que taille des elle applique cependant pas toute hormis cas des corps finis elle est pratique par algorithme berkowitz sans doute parce que dernier utilise pas division besoin moins espace meilleur des nous exposons une variante due wiedemann dans chapitre nous nous seulement des versions preuve correction algorithme pivot gauss dans texte ancien pivot gauss assez simples des algorithmes nous dirons une version algorithme est les multiplications matrices nombres entiers qui interviennent son sein sont selon classique usuelle dite parfois pour les matrices les multiplication usuelle consiste appliquer simplement formule produit pour multiplication des entiers agit algorithme apprend primaire autre part nous parlons versions dans mesure les qui cherchent lorsque nombreux processeurs sont sont pas dans chapitre nous que des versions rappelons enfin convention importante suivante dans tout cet ouvrage notation log signifie max pivot gauss est plus plus courante aussi bien pour calcul exact que pour calcul des lorsque les coefficients appartiennent corps dans lequel les base ainsi que test effectuent par des algorithmes son non seulement dans fait elle plusieurs variantes symboliques jouant important dans inversion des matrices dans des mais aussi dans fait que technique pivot est dans autres comme celle pour calcul comme nous verrons plus loin avec par exemple les hessenberg transformations une matrice est dite triangulaire resp triangulaire les resp dessus diagonale principale sont nuls dit matrice triangulaire lorsque contexte rend clair quelle variante agit une matrice triangulaire algorithmes base est dite unitriangulaire les coefficients sur diagonale 
principale sont tous sur des successives des inconnues dans pivot gauss consiste une matrice une matrice triangulaire par une succession transformations sur les lignes sur les colonnes les transformations sur les lignes une matrice sont trois types multiplier une ligne par non nul deux lignes iii ajouter une ligne produit une autre ligne par analogue les transformations sur les colonnes associe toute transformation sur les lignes sur les colonnes une matrice matrice dite obtenue effectuant cette transformation matrice matrice ordre selon cas toute transformation sur les lignes resp colonnes revient alors multiplier gauche resp droite matrice par matrice correspondante ceci est simplement fait que pour tout est clair que inverse une transformation sur les lignes resp colonnes est une transformation type sur les lignes resp colonnes analogues pour les colonnes pour tout pivot gauss une matrice application correspondante est dite unimodulaire elle est lorsqu veut limiter aux transformations qui correspondent produit par une matrice unimodulaire droit seulement celles type est facile voir une succession trois telles transformations permet obtenir lignes colonnes type qui est comme variante unimodulaire des transformations type les les transformations type sont transformations unimodulaires gauss proprement dite que nous ici est essentiellement une succession transformations type sur les lignes des lignes colonnes interviennent que lieu chercher pivot non nul pour ramener bon endroit chaque algorithme gauss consiste donc traiter pivot non nul issu faisant des pivot ensuite pivot suivante pour placer sur diagonale pivot les lignes colonnes obtiendrait donc une utilisant que des transformations unimodulaires fait est bien connu est une pivot gauss que toute matrice inversible est produit matrices toute matrice importe quel format peut par manipulations lignes colonnes une forme canonique type suivant avec lignes colonnes vides cette est une importance capitale citons par exemple gabriel roiter page qui donnent ailleurs dans leur chapitre des extensions son elles cette est utile son usage conduit des profonds limite aux transformations unimodulaires alors forme est que dans cas une matrice rectangulaire non inversible pour une matrice inversible algorithmes base faut modifier forme prenant son dernier coefficient diagonal non lorsque processus triangulation une matrice aboutit sans aucune permutation lignes colonnes intervienne qui lieu les principales dominantes rang sont garde les matrices aux transformations pivot gauss permet obtenir temps que triangulation est convenu appeler une une sous forme est une matrice triangulaire est forme triangulaire une matrice unitriangulaire est autre que inverse produit des matrices correspondant aux transformations successives sur les lignes pour une matrice existence une telle fait que processus triangulation arrive son terme sans aucun lignes colonnes elle aussi matrice puisque les mineurs principaux matrice sont autres que les produits successifs des pivots cours processus enfin toujours dans cas une matrice existence implique son cela serait plus cas pour une matrice comme peut voir ici nous donnons maintenant voir pivot gauss avec des matrices coefficients entiers exemples nous montrons deux exemples tous les pivots qui sur diagonale sont non nuls nous donnons les matrices premier est celui une matrice dont les coefficients entiers prennent pas plus pivot gauss que chiffres sur ligne les matrices ensuite matrice exemple est celui une matrice coefficients 
dans ont chiffre mais croissance taille des coefficients est spectaculaire sur ligne sur seconde algorithmes base nous allons comprendre ces comportements typiques exprimant les coefficients dans pivot gauss fonction extraits matrice initiale nous avons pour cela besoin les notations notation soit une matrice rang suppose que triangulation gauss aboutit son terme sans ligne colonne dans ces conditions pose note matrice issue note produit des matrices correspondant aux transformations cours sorte que est une matrice qui matrice que des colonne diagonale principale note position matrice celui matrice symbole kronecker est page notation page alors avec les notations les sont par les relations suivantes dans sinon sinon pivot gauss preuve les deux correspondent exactement aux affectations algorithme gauss les deux suivantes correspondent fait que les des sous matrices correspondantes sont par les transformations lignes dans algorithme est clair que matrice obtenue issue algorithme gauss dans cas est bien forme triangulaire matrice que lij est une matrice triangulaire avec outre sinon que des pour effet matrice qui doivent par leurs tiplication gauche par produit affecte que colonne dernier identique colonne revient tout simplement remplacer par colonne remarquons aussi que relation montre comment algorithme pivot gauss permet calculer les mineurs principaux matrice donc son lorsqu elle est nous comprenons maintenant dans cas une matrice initiale coefficients entiers comportement typique taille des coefficients dans pivot gauss matrice exemple voit sur les relations que tous ces coefficients peuvent comme des fractions dont sont des mineurs matrice initiale outre les mineurs sont valeur absolue donc aussi taille sont des entiers utilisant hadamard grosso modo partant une matrice lignes avec des coefficients taille obtient dans algorithme pivot gauss des coefficients taille pour qui concerne une matrice coefficients dans comme pour obtenir une majoration taille des coefficients nous devons remplacer par une matrice coefficients entiers algorithmes base est ppcm des grosso modo partant une matrice lignes avec des coefficients dont sont taille obtient maintenant dans algorithme pivot gauss des coefficients taille algorithme pivot gauss algorithme pour pivot gauss applique pour les matrices fortement dans cas pas recherche pivot matrice est rang maximum inf cet algorithme remplace matrice par une matrice dimensions dont partie diagonale principale comprise est celle matrice partie sans diagonale celle matrice obtient algorithme algorithme algorithme pivot gauss sans recherche pivot une matrice aij fortement sortie matrice ainsi que les matrices comme variables locales piv pour inf faire piv app pour faire aip aip pour faire aij aij aip apj fin pour fin pour fin pour fin fait inf boucle principale que elle modifie alors que les valeurs des pour aurait donc pour inf faire mais aurait fallu rajouter fin sans les nuls position avec lorsque pivot gauss alors piv ann pour faire fin pour fin calcul donne suivant proposition nombre dans lorsqu algorithme pivot gauss est par qui donne pour majoration matrice est rang les premiers mineurs principaux dominants sont non nuls algorithme pour lorsque pivot piv est nul fournit encore cela donne algorithme page suivante exemple voici une matrice rang suivie des matrices obtenues partir algorithme algorithmes base algorithme algorithme pivot gauss sans recherche pivot une matrice aij sortie matrice ainsi que rang lorsque est ordre dernier mineur principal dominant 
non nul obtient dans cas matrice comme dans algorithme variables locales piv inf tant que inf faire piv app piv alors inf sinon pour faire aip aip pour faire aij aij aip apj fin pour fin pour fin fin tant que fin algorithmes avec recherche pivot non nul rencontre pivot nul sur diagonale principale cours processus triangulation doit des lignes colonnes pour ramener pivot position convenable reste non nul dans coin alors est pas une que obtient avec pivot gauss mais une voir par exemple ahu une produit droite gauche matrice par des matrices permutation plus issue processus triangulation obtient pivot nul alors deux choses une bien pour tous auquel cas rang est processus est bien peut trouver des pivot gauss entiers tels que dans cas une permutation lignes colonnes doit intervenir pour remplacer pivot nul par qui revient remplacer matrice par matrice eip ejp ekl matrice obtenue partir par des lignes qui revient par des colonnes cette qui subir avec pas les lignes les colonnes cette matrice plus elle commute avec les type traitement pivot qui correspondent produit gauche par une matrice triangulaire par exemple sur une matrice doit faire des lignes colonnes avant traiter les pivots obtiendra suivante donc ainsi processus triangulation gauss lorsqu une recherche pivots intervient processus sans recherche pivot sur produit droite gauche matrice par des matrices permutation cela montre aussi que algorithme pivot gauss matrice donne temps que rang matrice notons aussi que avec recherche pivot permet calculer dans tous les cas matrice elle est suffit garder mettre jour chaque des permutations lignes colonnes une matrice surjective cas particulier est par les matrices surjectives pivot non nul existe toujours sur ligne voulue cela donne algorithme page suivante algorithmes base algorithme une matrice surjective une matrice aij surjective sortie matrice elle donne les matrices comme dans algorithme matrice permutation signature variables locales piv pour faire piv app piv alors tant que piv faire piv apj fin tant que echcol echcol echcol est une qui les colonnes matrice fin pour faire aip aip pour faire aij aij aip apj fin pour fin pour fin pour fin ainsi lorsqu une matrice est surjective son rang est nombre ses lignes peut produit trois matrices est une matrice triangulaire avec des sur diagonale une matrice triangulaire une matrice permutation lup permet des comme calcul effet pour avec commence par puis enfin les deux premiers sont des triangulaires que peut par substitutions successives des inconnues donc dernier est une simple permutation des inconnues enfin det det selon permutation par matrice faut remarquer une matrice non surjective admet pas toujours lup comme par exemple matrice par ailleurs lup une matrice surjective est pas unique comme peut voir sur matrice qui admet les deux lup avec encore avec notons enfin que matrice obtenue dans est une matrice surjective fortement calcul inverse pivot gauss permet plusieurs matrice triangulant matrice aux seconds membres dans variante qui consiste poursuivre processus gauss bas haut droite gauche sur les lignes matrice annuler les dessus diagonale principale pivot gauss sert calculer inverse une matrice inversible lorqu applique cette matrice droite avec matrice ordre moyennant qui fait passer constante dans pivot gauss est une traitement automatique des dont les coefficients les inconnues sont dans corps cette fonctionne bien dans cas matrices coefficients dans corps fini dans une moindre mesure dans cas corps mais hormis cas des corps finis elle 
majeur une simplification algorithmes base des fractions veut pas voir taille des coefficients exploser qui souvent temps calcul prohibitif par exemple lorsqu travaille avec corps fractions rationnelles plusieurs variables outre cette utilise des divisions applique donc pas matrice ses coefficients dans anneau arbitraire nous allons voir dans cette section que connue aujourd hui sous nom bareiss qui peut comme une adaptation pivot gauss classique permet dans une certaine mesure pallier aux par cette bareiss connue jordan dur elle semble avoir par dodgson plus connu sous nom lewis caroll qui une variante dans nous sous nom nous nom dodgson variante lewis caroll que nous exposons fin section est valable dans cas anneau peut par algorithme addition multiplication division exacte quand quotient exact peuvent par des algorithmes cela signifie pour division exacte algorithme prenant couple donnant sortie unique dans cas existe formule variantes soit une matrice dans reprenant les relations puisque tous les coefficients relation avec pour est calculer directement les relit alors sous forme est que nous appellerons formule peut obtenir appliquant sylvester proposition page matrice aij avec min cela donne proposition formule soit anneau commutatif arbitraire pour toute matrice aij relation avec les conventions usuelles apj aij app aij aip aij aij variante suivante formule applique obtient proposition matrice aij proposition formule bareiss plusieurs soit anneau commutatif arbitraire pour toute matrice tout entier lorsque aij dans son article bareiss pouvait utiliser cette avec pour calculer les aij proche proche lorsque anneau est algorithme division exacte fait bareiss couramment aujourd hui est sur formule celle reiss permet effet calculer les aij proche proche est donc une adaptation pivot gauss qui garantit tout long processus triangulation matrice appartenance des coefficients anneau base cet algorithme tient que les coefficients sont tous des extraits matrice initiale donc restent taille raisonnable pour plupart des anneaux usuels algorithmes base algorithme utilisant relation obtient algorithme dans version seul recherche pivot rappelons les conventions pour inf algorithme algorithme une matrice aij anneau est avec algorithme division exacte sortie matrice les premiers mineurs principaux dominants sont non nuls est nul elle contient position mineur avec inf entier est aussi outre retrouve facilement partir sortie comme avant exemple variables locales piv den coe den inf tant que inf faire piv app piv alors inf sinon pour faire coe aip pour faire aij piv aij coe apj den fin pour fin pour fin den piv fin tant que fin retrouve facilement partir matrice par algorithme utilisant les formules page notons cij les coefficients cette matrice alors pour matrice lij cij lij sinon pour matrice uij cij uij sinon peut voir sur exemple suivant exemple dans cet exemple reprend matrice exemple donne ses par les algorithmes gauss comparons algorithme algorithme pivot gauss dans cas anneau lorsqu utilise algorithme pivot gauss dans corps des fractions sans les fractions fur mesure elles sont qui est est pas difficile voir que les des ont comportement exponentiel avec algorithme par contre les ont seulement une croissance remarque dans cas non fonctionnement algorithme sans recherche pivot reste possible tous les mineurs principaux app cours processus triangulation sont non diviseurs les divisions exactes aij aip apj app peuvent faire algorithmiquement algorithmes base cette condition est satisfaite lorsqu 
remplace matrice par matrice xin tous les pivots sont des unitaires signe donc algorithme xin fait intervenir que structure anneau aucune division dans est objet paragraphe suivant cas anneau commutatif arbitraire est matrice xin une matrice les coefficients xin sont dans anneau est pas les divisions exactes requises sont ici des divisions par des unitaires qui par aucune division dans mais uniquement des additions soustractions multiplications particulier aucune permutation lignes colonnes intervient cours processus triangulation permet donc calculer matrice par son son adjointe cas elle est inversible son inverse cette par sasaki murao les auteurs remarquent que dans calcul base algorithme type produit croix par pivot les sont pour pour pour peut donc passer calculer les coefficients des dans calcul quotient doit pas non plus encombrer des termes dans les restes successifs pour algorithme usuel division des ceci conduit aux suivants les coefficients des dans produit deux calculent utilisant usuelle division exacte par unitaire calcule utilisant usuelle une affectation dans algorithme jore lorsque est pivot unitaire consomme dans anneau base tout cas plus pour ensemble algorithme obtient nombre total proposition soit une matrice sur anneau commutatif arbitraire algorithme matrice utilisant usuelle dans anneau dodgson dodgson est une variante cependant son est pas calcul une matrice mais seulement celui ses mineurs connexes les mineurs avec particulier elle peut pour calcul une matrice une variante formule lignes colonnes est formule suivante concernant les mineurs connexes cela donne les affectations correspondantes dans algorithme dodgson mais dernier fonctionne uniquement tous les mineurs connexes servir sont non nuls contrairement pivot gauss dodgson pas variante connue efficace dans cas une affectation est produite par algorithme lewis caroll propose dans communication des permutations circulaires sur les lignes les colonnes matrice voici montrant que dodgson applique pas toujours matrice est une matrice inversible lequel peut pas calculer par lewis carrol lorsqu effectue des permutations circulaires lignes colonnes algorithmes base pour voir plus clairement que signifie lewis caroll appelons matrice extraite notons les indices alors exemple dans sans recherche pivot calcule tous les mineurs aij une matrice dans pivot gauss calcule les quotients aij aij matrice une structure interne comme dans cas des matrices hankel toeplitz structure est perdue dans dodgson calcule tous les mineurs connexes ordre matrice ensuit que dans cas une matrice les matrices par dodgson sont ceci diminue nombre effectuer fait passer dans cas anneau les divisions exactes sont faisables par algorithme obtient les avantages que dans algorithme concernant taille des coefficients algorithme dodgson pour une matrice hankel nous donnons ici une version algorithme dodgson pour les matrices hankel dont tous les mineurs connexes sont non nuls est algorithme page est une liste contenant les coefficients matrice hankel initiale sortie est tableau inf qui contient tous les mineurs connexes matrice suivant algorithme dodgson pour initialisation sur ligne des les mineurs connexes ordre sur ligne les coefficients les mineurs connexes ordre sur ligne les coefficients matrice hankel par les mineurs connexes ordre dans colonne les des qui ont coefficient sur leur diagonale ascendante algorithme algorithme dodgson pour une matrice hankel deux entiers une liste cette liste contient les coefficients une matrice hankel anneau est 
avec algorithme division exacte sortie tableau rempli pour inf contient sur ligne les mineurs connexes ordre matrice tous non nuls variables locales inf tableau vide taille voulue pour faire fin pour fin initialisation pour faire pour faire fin pour fin pour fin algorithme est pratiquement dans cas une matrice toeplitz suffit changer signe dans affectation peut appliquer pour calcul exemples dans premier exemple matrice hilbert ordre qui est exemple classique matrice hankel mal algorithmes base trice est inverse entier grand voici alors sortie algorithme dodgson ligne des voici ensuite exemple sortie algorithme avec une matrice hankel ordre coefficients entiers lisibles sur ligne hessenberg toutes les matrices ici sont coefficients dans corps commutatif matrices une matrice hij est dite resp hij que resp que dit encore que est une matrice hessenberg hessenberg une matrice forme est donc une matrice hnn par une sur suivante des matrices hessenberg proposition soit hij une matrice hessenberg par principale dominante ordre par pose suite des mineurs principaux dominants alors relation hkk hik pour voir suffit suivant ligne resp colonne est une matrice resp appliquant matrice xin dont les mineurs principaux dominants sont les des principales dominantes obtient les relations suivantes dites relations hessenberg permettant calculer proche proche les pour sachant que hkk hessenberg elle consiste calculer une matrice ordre dont les aij appartiennent corps forme une matrice algorithmes base hessenberg semblable dont les hij appartiennent algorithme algorithme hessenberg est corps entier une matrice aij sortie variables locales jpiv ipiv iciv piv hij les matrices successives liste des successifs dans initialisations forme hessenberg pour jpiv faire ipiv jpiv iciv ipiv piv hiciv jpiv tant que piv iciv faire iciv iciv piv hiciv jpiv fin tant que piv alors iciv ipiv alors echlin ipiv iciv echange lignes echcol ipiv iciv echange colonnes fin pour iciv faire jpiv ajlin ipiv manipulation lignes ajcol ipiv manipulation colonnes fin fin pour calcul pour faire hmm pour faire fin pour fin pour fin hessenberg pour faire applique pivot gauss aux lignes matrice prenant comme pivots les sousdiagonaux matrice prenant bien soin effectuer les transformations inverses sur les colonnes pour que matrice soient semblables plus consiste tout abord voir position est nul auquel cas faut chercher non nul lui sur colonne tel existe pas passe suivante sinon par une permutation lignes pivot non nul bon endroit position qui revient multiplier gauche matrice par matrice permutation obtenue permutant les lignes les colonnes qui revient matrice multiplie droite par matrice permutation qui est ici son inverse afin que matrice obtenue issue chaque reste semblable matrice pivot non nul alors bon endroit utilisant pivot pour faire des lui dans colonne qui revient multiplier gauche matrice par une matrice type multiplier ensuite droite matrice par matrice obtenue partir changeant signe des sous diagonale est clair que ces qui qui sont sur matrice provenant appelonsla affectent pas les colonnes donnent une matrice semblable algorithmes base ceci donne algorithme hessenberg primo calculer aide une matrice hessenberg semblable matrice secundo calculer qui est aussi celui utilisant les relations hessenberg obtient ainsi algorithme page dans cet algorithme ajlin une manipulation lignes sur matrice ajoute ligne ligne par exemple dans cet exemple montre une entiers forme hessenberg petite matrice coefficients voici liste des lignes forme 
hessenberg nous avons pas les position lorsque voit des fractions grande taille les coefficients matrice initiale sont par valeur absolue plus grand dans matrice est environ une dans les conditions avec des matrices ordre variant entre donne une taille des hessenberg coefficients type quadratique taille maximum est ordre agit donc ici cas typique une qui applique efficacement directe calcul formel que dans cas corps fini remarque une matrice triangulaire est une matrice hessenberg qui ses valeurs propres dans mais une matrice hessenberg qui ses valeurs propres dans est pas triangulaire semblable une matrice triangulaire comme voit avec matrice nombre phase forme hessenberg est chacune des comporte travail sur les lignes avec divisions multiplications autant additions inverse sur les colonnes comporte multiplications autant additions qui donne nombre total dans qui est asymptotiquement ordre phase qui consiste calculer les des principales dominantes hessenberg effectue par sur partir par nombre resp permettant calculer utilisation des relations hessenberg conduit aux relations suivantes vraies pour pour les pour les algorithmes base dans les deux cas que nombre celui des comme cela donne par sommation autant dans corps phase est donc plus asymptotiquement cinq fois plus nombre que phase calcul matrice proposition algorithme hessenberg calcule les toutes les principales dominantes une matrice sur corps avec moins soit tout remarque que hessenberg gagne par rapport aux calcul elle perd sur aspect essentiel plan pratique celui absence raisonnable taille des coefficients formule permettant exprimer dans pivot gauss chaque coefficient comme quotient deux extraits matrice voir page applique plus dans processus hessenberg effet les transformations subies par les lignes sont ici suivies par des transformations inverses sur les colonnes dispose pas actuellement pour hessenberg pourtant plus rapide temps prend compte que nombre une formule analogue qui permette conclure sur question taille des coefficients cela est par les que nous avons avoir voir exemple chapitre dans cas matrices coefficients entiers utilisant calcul modulaire section page remarque signalons existence une version algorithme hessenberg sur anneau dans interpolation lagrange qui permet garder les coefficients tout long des calculs dans anneau base elle semble bien calcul modulaire sur les anneaux coefficients entiers interpolation lagrange elle calcul une matrice calcul est donc dans une situation calcul des pose pas cela peut cas par exemple lorsque pivot gauss heurte pas des graves simplification fractions lorsque dispose algorithme efficace sans division pour calcul des comme celui suivant une ligne une colonne matrice est creuse consiste appliquer formule interpolation lagrange det xin formule bien connue sont distincts avec restriction suivante les pour doivent non diviseurs dans doit disposer algorithme division exacte par les dans est par exemple cas avec lorsque est nulle finie plus lorsque division exacte par les les entiers les elle est possible est unique par algorithme effet choisit pour formule interpolation det det xin qui exige effectuer des divisions exactes par les entiers algorithmes base fait avec deg suffit appliquer interpolation lagrange qui revient calculer valeur points lieu nombre lors cet algorithme est peu fois celle calcul ordre pour calcul des det choisit utiliser algorithme pivot gauss algorithme jordanbareiss est dans une situation algorithme gauss obtient donc pour interpolation lagrange fait les 
meilleurs algorithmes sans division dont dispose actuellement pour calculer les passent par calcul qui rend caduque interpolation lagrange avec calcul dans anneau base maple sera ces autres algorithmes sur quelques exemples sur machine voir chapitre verrier variantes cette par astronome verrier repose sur les relations newton entre les sommes newton les dans des sur anneau commutatif verrier une matrice soit dans anneau les entiers sont non diviseurs les seules divisions requises sont des divisions exactes par ces entiers principe verrier consiste calcul des coefficients calcul ses sommes newton sont effet aux traces des puissances comme montre lem page rappelons que anneau base pas besoin puisque les sommes newton peuvent sans recours aux valeurs propres utilisant les page dans les deux cas nous avons que nombre est verrier variantes ceci donne algorithme algorithme algorithme verrier calculer les puissances matrice ainsi que les diagonaux matrice calculer les traces des matrices calculer les coefficients utilisant les fin nombre pour anneau par contexte nous noterons nombre pour multiplication deux matrices ordre trouvera une plus dans notation page lorsqu utilise usuelle multiplication des matrices pour algorithme verrier compte est suivant utilise les utilisent proposition nombre total lors algorithme verrier utilise multiplication usuelle des matrices est des algorithmes algorithme verrier ont par nombreux auteurs avec des concernant nous les dans suite cette section dans chapitre algorithmes base cette par faddeev sominskii souriau frame est une astucieuse algorithme verrier comme dans cas algorithme verrier anneau est tel que division par entier quand elle est possible est unique autrement dit les entiers sont pas des diviseurs par algorithme cette permet calculer une matrice adjointe matrice son inverse existe vecteur propre non nul relatif une valeur propre suppose plus que anneau est posant consiste calculer les coefficients pour utilise pour cela calcul matrice adjointe tel que dans rappelons matrice adjointe est matrice formule page dans adj xin laquelle les matrices sont par les relations page avec utilisant les relations newton page que pour effet partant des suivantes voir page qui des relations pour tout entier les traces des deux membres dans chacune ces matricielles pour obtenir nck mais kck sont les relations newton pour comme nck cause obtient kck notons par ailleurs comme nous avons fait page que det verrier variantes qui montre que det est inversible dans alors inverse par rappelons que adj qui donne sans autre calcul adjointe matrice nous traduisons qui vient par algorithme dans lequel successivement les matrices les matrices algorithme algorithme entier une matrice anneau est avoir algorithme division exacte par les entiers sortie variables locales pour faire fin pour fin calcul vecteurs propres dans cas est est une valeur propre simple une racine simple calcul donnant entre autres matrice adjointe adj xin nous permet obtenir vecteur propre non nul effet donc mais donc algorithmes base ainsi puisque est une racine simple par matrice est pas nulle mais xin donne qui prouve que importe quelle colonne non nulle est donc vecteur propre non nul relatif valeur propre par colonne non nulle matrice par colonne calcul vecteur propre peut faire suivante poser colonne matrice faire pour allant vecteur propre est autre que plus est matrice est pas nulle elle est rang importe quelle colonne non nulle vecteur propre non nul pour valeur propre par contre par valeur propre est non 
seulement trace mais matrice est nulle page puisque rang matrice est dans cas plus matrice dans cas donne donc aucun vecteur propre non nul montre alors que sont les matrices successives par rapport matrice qui permettent calculer des vecteurs propres non nuls relatifs effet pour est par remarquons que nulle est ordre dans est facile voir quelconque est endomorphisme une valeur propre est par dimension propre correspondant est tant que verrier variantes qui une formule analogue formule leibnitz mais plus simple outre admet comme racine ordre seulement appliquant successivement les pour allant est valeur propre matricielle xin obtient suite xin dans ces par valeur propre tenant compte fait que obtient soit plus petit entier tel que alors car sinon aurait avec qui contredit nous sommes dans anneau fait que matrice est donc toute colonne non nulle est vecteur propre non nul pour valeur propre multiple nombre algorithme consiste calculer pour allant produit matriciel coefficient enfin matrice rappelons par nombre dans anneau base pour multiplication deux matrices algorithmes base ordre algorithme est peu que pour algo verrier gagne outre plus grande avantage est que aussi matrice adjointe particulier calcul matrice inverse elle existe que divisions dans enfin calcul vecteur propre non nul relatif une valeur propre fait moyennant proposition avec calcul matrice son matrice adjointe son inverse quand elle existe ainsi que des propres dimension quand valeur propre correspondante fait pour calcul seul effectue preparata sarwate preparata sarwate est une astucieuse verrier sur remarque simple suivante pour calculer trace produit deux matrices ordre suffit puisque calcul plus dans verrier est celui des traces des puissances successives matrice dont veut calculer posons donc calculons les pour puis les pour calcul consomme dans alors les valeurs pour parcourent intervalle doit calculer outre obtient donc toutes les sommes newton pour peu moins que dans verrier variantes ceci donne algorithme comme des coefficients partir des sommes newton obtient proposition suivante algorithme algorithme preparata sarwate version simple une matrice est anneau les algorithme verrier sortie variables locales calcul des puissances pour pour faire fin pour calcul des puissances arj pour pour faire fin pour calcul des sommes newton pour faire pour faire fin pour fin pour alors fin calcul des coefficients calculer les coefficients utilisant les fin proposition supposons que anneau commutatif satisfasse les algorithme verrier division par entier quand elle est possible est unique par algorithme nombre total lors algorithmes base preparata sarwate utilise multiplication usuelle des matrices est elle est sur partitionnement gas samuelson elle avantage appliquer anneau commutatif arbitraire berkowitz donne une version laquelle nous extrayons une simple efficace elle montre pratique cet algorithme pour les machines parmi les algorithmes les plus performants actuellement pour calcul sans division les test chapitre principe algorithme soit aij une matrice ordre sur anneau commutatif arbitraire aux notations introduites dans section pour tout entier par principale dominante ordre partitionne comme suit matrice est par formule samuelson proposition page que peut sous forme suivante notons pour calculer selon formule samuelson peut effectuer produit supprimer les termes enfin diviser par peut aussi calcul sous forme toep est vecteur colonne des coefficients toep est matrice toeplitz suivante partir toep algorithme algorithme 
berkowitz principe une matrice sortie pour dans calculer les produits qui donne les les matrices toep calculer produit toep toep toep obtient fin avec obtient algorithme berkowitz informel version dans version plus simple algorithme berkowitz calcul des coefficients matrice toep fait naturellement par utilisation exclusive produits scalaires produits algorithmes base matrices par des vecteurs dans produit effectue droite gauche donc utilise que des produits matrice par vecteur cela donne algorithme algorithme algorithme berkowitz version simple entier une matrice aij sortie variables locales listes longueur variable dans initialisation calcul des des matrices principales dominantes ordre les listes successives dans pour faire air arr pour faire arj pour faire ajk fin pour fin pourp arj pour faire pmin fin pour fin pour fin ainsi sans calculer des puissances matrices commence par calculer puis successivement pour allant produit matrice par vecteur suivi produit scalaire arithrr qui traduit par pour chaque que nombre dans anneau base intervenant dans chistov calcul est est pour multiplication des matrices toeplitz toep commence par multiplier matrice toep gauche par matrice toep pour obtenir vecteur qui est vecteur ainsi suite jusqu toep comme chaque multiplication toep une matrice avec des sur diagonale par vecteur dans calcul fait base proposition total algorithme simple berkowitz dans anneau base chistov principe chistov consiste calculer une matrice ramenant inversion formel det dans anneau des formelles est signe puisque det det xin comme les final calcul sont tous les calculs peuvent faire modulo dans anneau des calcul peut utilisant une multiplication rapide des chapitre mais cela change pas substantiellement global qui reste avec constante asymptotique nous avons pas cette lors nos tests algorithmes base formelles encore peuvent faire anneau des dans ordre sur dans suite nous noterons souvent cet anneau algorithme chistov utilise fait que pour toute matrice ordre diagonale matrice inverse est det det encore det det principale dominante ordre avec convention det ceci permet lorsque est fortement detb appliquant fait matrice xan qui est fortement puisque tous ses mineurs principaux dominants sont des terme constant sont donc inversibles dans anneau obtient det xan mais isomorphisme canonique matrice xar est aussi inversible dans des formelles sur anneau matrices son inverse est matrice donc notant colonne mod par notant mod obtient mod chistov donc dans ainsi est inverse modulo produit modulo terme constant rappelons que calculer est produit par ordre obtient alors algorithme chistov algorithme algorithme chistov principe matrice sortie calculer pour produits qui donne les formule calculer produit des modulo qui donne mod formule inverser modulo obtient prendre ordre obtient multipliant par fin nous maintenant version cet algorithme version dans version plus simple obtient algorithme page suivante maintenant que nombre dans anneau base pour cette version est asymptotiquement ordre algorithmes base algorithme algorithme chistov version simple entier une matrice sortie variables locales initialisation pour faire fin pour pour faire pour faire pour faire fin pour fin pour pour faire fin pour fin pour mod fin reprenons effet les quatre dans algorithme chistov page plus fait calculer successivement les produits pour pour puisque est autre que composante vecteur pour chaque valeur commence par calculer puis pour allant produit matrice par vecteur qui traduit par multiplications pour chaque 
compris entre que nombre dans calcul est aux suites revient calculer produit plus qui fait base consiste calculer produit ordre des dlog dlog obtenus issue dlog successives ordre cela fait nombre ordre dlog proposition total algorithme chistov dans anneau base nous discussion sur nombre etape etape etape etape etape log tableau version algorithme chistov aux suites dans section nous donnons algorithme calcul une matrice sur des successifs vecteurs base canonique par dans section nous algorithme berlekamp qui permet calculer minimal une suite dans corps lorsqu sait elle une relation ordre les premiers termes suite dans section algorithme wiedemann qui utilise celui berlekamp pour trouver avec une bonne une matrice sur corps fini algorithmes base algorithme frobenius nous donnons ici algorithme qui est sur une description nature pour endomorphisme espace vectoriel comme calcule endomorphisme avec essentiellement nombre que dans pivot gauss qui calcule que sans les que hessenberg hormis cas des corps finis concernant taille des coefficients cas usuel nous aurons besoin page algorithme laquelle nous donnons nom jorbarsol elle calcule relation exprimant colonne fonction des dans une matrice fortement ayant une colonne plus que lignes fin calcul triangulation reste dans anneau relation est coefficients dans une matrice ordre prise hasard par exemple par maple elle endomorphisme note premier vecteur base canonique ses successifs par ceci fournit une matrice voici exemple typique les vecteurs sont matrice est fortement est cas ici concernant taille des coefficients ceux matrice initiale sont par valeur absolue ceux matrice dans aux suites colonne sont par est une des normes section par exemple pour norme frobenius algorithme algorithme jorbarsol une matrice aij fortement anneau est avec algorithme division exacte sortie notant colonne fin calcul reste dans les sont dans variables locales piv den coe den pour faire piv app fortement pour faire coe aip pour faire aij piv aij coe apj den fin pour fin pour den piv fin pour calcul des coefficients pour faire apm pour faire aim aim aip fin pour fin pour fin peut calculer relation qui relit matrice sur base estpalors clairement matrice compagnon page obtient par formule cet algorithme qui calcule algorithmes base tionne lorsque minimal suite dans est dans cas est effet minimal signe pour calculer relation applique jorbarsol elle commence par triangulation matrice cette triangulation fait une change pas les deux colonnes donne les suivantes traite une matrice ordre dont une norme est par coefficient position dans matrice ainsi obtenue est mineur matrice avec min priori comme dans exemple ailleurs plus grand coefficient serait position par qui reste raisonnable tout cas bien meilleur que dans algorithme hessenberg algorithme termine donnant combinaison par calcul successif des coefficients une matrice frobenius ordre est autre nom une matrice compagnon peut comme matrice application multiplication par classe dans quotient sur base canonique algorithme que nous venons ayant pas nom officiel nous appellerons algorithme frobenius est algorithme page suivante nombre avec une matrice ordre algorithme frobenius donne calcul qui est ordre grandeur que pour pivot gauss plus algorithme deux grandes matrice aux suites algorithme algorithme frobenius cas simple une matrice aij anneau est avec algorithme division exacte sortie algorithme fonctionne que premier vecteur base ses successifs par sont variables locales colonne dans colonne pour faire colonne fin pour 
jorbarsol fin tiplications additions applique algorithme jorbarsol matrice vue colonne cet algorithme utilise les divisions sont toutes exactes ceci donne suivant proposition algorithme frobenius dans cas usuel simple une matrice ordre sur anneau dans lequel les divisions exactes sont explicites demande tout algorithmes base dans anneau plus cet algorithme pratique sur corps fini les algorithmes hessenberg frobenius meilleurs que tous les autres qui correspond fait ils fonctionnent seulement mais passe des matrices coefficients dans algorithme berkowitz devient plus performant car ses sont sur des entiers taille mieux passe des anneaux tels que algorithme berkowitz plus fait utilise pas divisions enfin sur des anneaux non les algorithmes hessenberg frobenius dans leurs variantes avec recherche pivot non nuls fonctionnent plus toute cas difficile triangularisation par blocs que nous maintenant est adaptation pour cas plus difficile qui cependant rarement cette comme celle pour cas usuel celui est son minimal premier vecteur base espace fait partie usage nous savons pas qui attribuer soit une matrice dans notons base canonique identifiera avec endomorphisme ayant pour matrice dans cette base nous allons construire une nouvelle base dans laquelle endomorphisme aura une matrice suffisamment sympathique dont sera facile calculer agit matrice une forme triangulaire par blocs avec des blocs diagonaux ayant forme frobenius voyons qui passe sur exemple exemple dans soit base canonique vecteur matrice ordre rappelons que krylov couple couple dit parfois aussi sous espace cyclique aux suites kra est par suite pour obtenir une base kra dimension nous calculons successivement les vecteurs nous dernier vecteur qui soit pas combinaison ceux qui qui revient nous construisons successivement les matrices nous matrice dont rang est nombre colonnes dans notre cas cela donne suite matrices faut matrice car matrice est rang remarque effet que une base kra est donc couple dim matrice correspondante est passons second vecteur base canonique remarque que est pas dans poursuit construction base avec les matrices puis puis obtenir une matrice dont colonne est combinaison des autres ici est vecteur qui est combinaison ceux qui appartient algorithmes base obtient suite matrices matrice est rang puisque colonne est combinaison des autres peut voir par exemple par pivot gauss qui fournit relation passe ensuite vecteur base canonique remarque que est pas dans construit alors une base kra poursuit donc construction base avec les nouvelles matrices matrice que nous notons est rang est matrice passage base canonique base que nous venons construire dans cette nouvelle base est clair que matrice endomorphisme est une matrice triangulaire par blocs les blocs diagonaux matrices frobenius aux suites peut ailleurs calculant produit matriciel pour obtenir dont celui aussi est produit des des blocs diagonaux frobenius fait matrice par suite les des blocs diagonaux frobenius peuvent partir des relations relation qui exprime vecteur comme combinaison des vecteurs base qui peut obtenue appliquant pivot gauss matrice description algorithme prend puis sauf est avec auquel cas prend entier comme suit les vecteurs sont mais des ceci notre base les tests dont nous avons besoin peuvent obtenus appliquant pivot gauss avec lignes mais sans colonnes sur les matrices successives etc cette fournit aussi relation lorsque entier est atteint notez aussi que nous avons pas besoin calculer les puissances successives matrice mais seulement les 
successifs vecteur par notre base est exprimant sur base nous obtenons temps matrice sur sous forme algorithmes base une matrice frobenius dont est signe nous cherchons premier vecteur ceci nous fournit vecteur calcul indice donc vecteur peut nouveau obtenu par pivot gauss sans colonnes aux matrices ensuite entier comme suit les vecteurs sont mais des ceci nouveau notre base notre base est exprimant sur base nous obtenons temps matrice sur sous forme une matrice triangulaire par blocs ayant pour blocs diagonaux deux matrices frobenius aux suites dont est signe nous cherchons vecteur parmi les vecteurs restants base canonique nous continuons processus fin compte ayant nombre relativement restreint certainement produits type matrice fois vecteur ayant pivot gauss nombre relativement restreint fois nous avons obtenu une nouvelle base ainsi que matrice sur cette base sous forme une matrice triangulaire par blocs ayant sur diagonale des blocs matrices frobenius matrice est donc produit des des blocs diagonaux qui sont par simple lecture colonne matrice frobenius notons pour terminer est facile sur une telle forme que chacun des vecteurs est par endomorphisme qui fournit une preuve pour preuve suffit ailleurs constater fait pour vecteur car est simplement premier vecteur une base donc importe quel vecteur non nul priori domaine nombre dans cet algorithme nombre est encore son domaine est celui des corps plus celui des anneaux clos que nous avons occasion minimal voir section page effet avec tel anneau les sont automatiquement coefficients dans ensuit que les type jorbarsol que nous utilisons cours algorithme calculent que des dans dans cas une matrice coefficients dans les majorations des coefficients que celles que nous avons dans cas facile plus usuel algorithmes base algorithme donne dans corps les premiers une suite pour laquelle sait existe est calculer minimal suite une telle solution est par algorithme qui donne sortie ainsi que les coefficients est alors obtenu divisant par algorithme utilise les suite des triplets des restes des multiplicateurs successifs dans algorithme euclide pour couple mes posant ces triplets pour tout les relations plus les deux relations facilement par sur processus premier reste disons plus bas que pour obtenir avec posons sup alors peut montrer que par son coefficient dominant est minimal suite par exemple dans cas que les termes compris entre sont nuls constate que est bien suite ceci donne algorithme page suivante dans lequel coefficient dominant cet algorithme est berlekamp mais sous une forme relation avec algorithme euclide invisible est massey qui fait rapprochement pour plus sur relation entre cet algorithme algorithme euclide pourra consulter aux suites algorithme algorithme entier une liste non nulle corps les premiers termes une suite sous elle admet sortie minimal suite variables locales initialisationp boucle tant que deg faire quotient reste division par fin tant que sortie sup deg deg retourner fin wiedemann algorithme wiedemann pour des sur corps est algorithme probabiliste avec divisions qui est sur des suites est efficace dans cas des matrices creuses sur les corps finis utilise fait que minimal une matrice est donc signe alors existe toujours vecteur pour lequel minimal suite est suffit effet comme nous avons dans section corollaire prendre vecteur dehors une finie algorithme wiedemann choisit hasard une forme vecteur puis calcule les premiers termes suite dans enfin minimal cette suite est obtenu par algorithme algorithmes base dans cas par 
wiedemann corps est fini cardinal une mesure naturelle postulant une des corps minimal matrice est son pour trouver vecteur convenable essais successifs est log compare avec algorithme frobenius voit que doit calculer vecteurs lieu par contre calcul minimal est ensuite beaucoup plus rapide outre dans cas des matrices creuses calcul des vecteurs est rapide notons enfin que les algorithmes frobenius wiedemann peuvent significativement moyen multiplication rapide des multiplication rapide des matrices sections circuits introduction dans chapitre nous introduisons notion fondamentale circuit qui est cadre dans lequel situe analyse des algorithmes dans cet ouvrage peut vue comme une qui cherche analyser les algorithmes qui acceptent mettre sous forme familles circuits dans circuit les instructions branchement sont pas qui semble une limitation assez les algorithmes usuels sont effet ordinairement utilisant des tests que dans beaucoup cas cette limitation apparente est pas une notamment raison des divisions strassen que nous exposons dans section par contre cadre peu strict fourni par les circuits est lui que peut mettre place diviser pour gagner lorsqu envisage les algorithmes des tests signe donc des instructions branchement devient souvent une autre branche est avec des que nous pas ici dans section nous donnons les des circuits leur variante les programmes programs anglais est occasion introduire quelques mesures pour ces algorithmes dans section nous introduisons des divisions selon strassen nous quelques uns des les plus importants qui concernent dans section nous donnons une qui transforme circuits cuit qui calcule une fraction rationnelle circuit taille comparable taille est par plus qui calcule fois fonction toutes ses partielles circuits programmes circuit constitue une naturelle simple les calculs dans anneau arbitraire dans cas algorithme utilise pas instructions branchements uniquement des boucles type pour faire fin pour taille est ces boucles peuvent mises plat obtient programme dont les seules instructions sont des affectations plupart des algorithmes dans chapitre sont type donnent donc lieu lorsque les dimensions des matrices sont des programmes quelques par exemple circuit est par calcul une matrice par algorithme pivot gauss dans cas des matrices fortement pour des matrices taille calcul est alors toujours exactement peut comme une suite affectations peut disposer dessiner moyen graphe plan par exemple pour une matrice donnant nom chaque calcul addition soustraction multi plication division reprenant notation aij introduite section obtient mise plat sous forme programme page dans lequel toutes les affectations une profondeur peuvent principe pour une profondeur les calculs sont faits avec des variables aux calcul comprend profondeur est largeur est circuits programmes programme calcul une matrice ordre par pivot gauss sans recherche pivot une matrice aij coefficients dans corps sortie les coefficients lij dessous diagonale matrice les coefficients uij aij matrice profondeur traitement premier pivot profondeur largeur profondeur largeur profondeur traitement pivot profondeur largeur profondeur largeur profondeur traitement pivot profondeur profondeur profondeur fin plus circuits programme sans division resp avec division avec constantes dans est une partie anneau corps est ensemble variables est entier donnant profondeur variable est identificateur variable une suite instructions affectations des types suivants bien une variable avec bien une constante resp avec les 
conventions pour mises part les variables qui sont les programmes toutes les variables sont exactement une fois dans programme sont les variables affectation programme les constantes sont comme profondeur nulle note prof prof quelques commentaires sur cette dans cas programme avec divisions peut pour certaines valeurs des variables dans corps souvent ensemble est vide programme peut alors sur corps arbitraire sur anneau arbitraire est sans division naturellement tous les identificateurs doivent distincts les affectations type sont uniquement pour cas respecter certaines contraintes dans une gestion des ordinairement demande que dans une affectation ait prof max prof prof dans une affectation ait prof prof peut aussi demander que dans une affectation ait prof prof texte programme doit normalement quelles sont les variables les sorties mais peut demander que les sorties soient exclusivement les variables profondeur maximum peut demander aussi que toute variable profondeur non maximum soit remarque plus programme peut pour importe quel type structure une fois ont les base dans structure circuits programmes qui peuvent importe quelle par exemple programme correspond structure boole avec les usuels autre exemple dans les anneaux commutatifs peut une notion programme avec introduit tant base les detn comme qui donnent une matrice fonction ses nous utiliserons terminologie suivante concernant les programmes nombre des dans programme est par plusieurs appelle les programme par exemple dans programme qui calcule produit deux matrices prend entier comme dans programme polynomiales maximum dense les sont dans une affectation type sont les profondeur programme est profondeur maximum ses variables affectation prof elle correspond nombre programme taille longueur programme nombre total toutes les les affectations type pour chaque prof nombre durant cette appelle largeur programme plus grand ces nombres max prof lors programme sur anneau sur corps dont les sont les ont priori importe quelle taille tandis que les constantes programme ont une taille une fois pour toutes point vue calcul concret sur des objets est donc souvent est dense lorsque codage donne liste tous les coefficients des dessous dans ordre convenu est creuse lorsque codage donne liste des paires code par exemple peut par son coefficient non nul dans circuits droit estimer que seules importent vraiment les affectations sans scalaires celles type aucun des deux est une constante ainsi que est pas une constante ceci donne lieu aux notions longueur stricte profondeur stricte dans lesquelles seules sont prises compte les affectations sans scalaires une multiplication division sans scalaire dans programme est encore dite essentielle variation sur dans mesure que les additions ainsi que les multiplications divisions par des constantes sont relativement peu pour des raisons plus profondes ordre est par longueur multiplicative programme par profondeur multiplicative qui sont comme longueur profondeur mais tenant compte que des multiplications divisions essentielles par exemple programme une profondeur multiplicative une largeur multiplicative circuit comme graphe peut programme sous forme dessin plan par exemple pour calcul par algorithme pivot gauss avec une matrice fortement peut par dessin circuit page suivante pour une matrice obtiendra circuit profondeur avec nombre portes tenant compte des affectations qui donnent les mineurs principaux dominants matrice veut formaliser genre dessin qui visualise bien situation peut adopter suivante circuit 
avec divisions resp sans division avec constantes dans est une partie anneau corps est graphe acyclique chaque qui est pas une porte ayant exactement deux circuits programmes division multiplication soustraction pour division soustraction brin entrant gauche premier terme circuit pivot gauss pour une matrice circuit est suivante chaque porte est par triplet est profondeur est nom qui identifie avec ici triplet variable triplet avec par chaque interne chaque porte sortie est par triplet est profondeur est son identificateur resp une enfin dans cas des dans cas circuit est dans anneau non commutatif faut les distinguer gauche droite les deux arcs qui aboutissent correspondant cet les portes sortie correspondant aux calcul sont par une marque distinctive dans leur identificateur fait dans toute suite nous utiliserons circuit programme tout que pour qui est codage nous choisissons toujours codage correspondant programme nous comme circuits synonymes programme circuit dans circuit peut chaque comme dans cas sans division une fraction rationnelle dans cas avec division circuits appelle circuit circuit sans division dont tous les noeuds des qui structure suivante les sont ceux strictement calcul des fait deux phases dans phase une seule effectue des produits dans phase calcule des combinaisons des proposition tout circuit sans division qui calcule une famille peut circuit qui calcule toutes les composantes des sortie circuit obtenu est profondeur multiplicative par rapport circuit initial profondeur par log longueur multiplicative plus par longueur totale plus par preuve chaque noeud circuit initial les somme composantes des composantes sans importance analyse alors calcul qui est fait sur les composantes lorsqu dans circuit original une affectation correspondant une addition obtient sur les composantes plus additions qui peuvent dlog circuits programmes lorsqu dans circuit original une affectation correspondant une multiplication essentielle obtient qui correspond plus multiplications essentielles entre les composantes multiplications scalaires additions soit tout par ailleurs peut ensemble calcul que tous les soient ceux des divisions dans les circuits certains circuits avec division comportent une division par une fraction rationnelle identiquement nulle plus aucun calcul raisonnable implicitement suppose toujours est pas dans cas cas algorithme sans recherche pivot est peu plus subtil correspond circuit avec divisions exactes que lorsqu regarde comme produisant chaque porte corps des fractions rationnelles reste fait toujours dans anneau des les divisions ont toujours pour non une fraction rationnelle les portes sortie circuit sont des les est priori que circuit soit sans division pourra effet dans importe quel anneau dans cas circuit avec divisions dans corps peut que certaines divisions soient impossibles non parce doit diviser par une fraction rationnelle identiquement nulle mais parce que les valeurs des annulent fraction rationnelle est encore une raison qui fait les circuits sans divisions une autre raison est que corps est nulle contient des transcendants addition deux fractions est circuits une affaire bien encombrante addition dans par exemple dans tout abord multiplications une addition suivies une simplification fraction qui calcul pgcd donc les divisions successives algorithme euclide ainsi lorsque les sont dans par exemple que tout calcul reste dans essaie circuit avec divisions dans anneau arbitraire situation est encore peu toute division par diviseur est impossible divise par 
non diviseur retrouve naturellement dans anneau total des fractions que corps des fractions anneau mais autorisant comme uniquement des non diviseurs dans naturellement les calculs dans nouvel anneau sont nettement plus que ceux dans discussion propos pivot dans profondeur circuit est pertinent plus titre tout abord profondeur quelque sorte temps calcul donne une temps pour chaque dispose suffisamment processeurs entre lesquels les calculs faire ensuite profondeur permet taille des objets lorsque calcul est comme une par exemple dans grosso modo taille double maximum lorsque profondeur augmente dans cas des circuits sans divisions par exemple dans profondeur multiplicative est loin plus importante pour taille des objets tout ceci conduit attacher une importance toute aux circuits sans division faible profondeur des divisions strassen lorqu dispose une utilisant les divisions dans corps des fractions rationnelles pour calculer coefficients dans anneau une technique strassen sur une simple permet toutes les divisions dans cette des divisions strassen principe base est que division par forme peut par produit par formelle condition dans une situation sait peut une partie finie bien des formelles jeu nous allons expliquer cette fondamentale sur exemple calcul une matrice par algorithme pivot gauss sans recherche pivot mis sous forme circuit pour les matrices pour une valeur circuit comme programme dans anneau arbitraire peut limiter par les coefficients matrice anneau total des fractions naturellement obstacle lors une affectation est diviseur cependant des cas cet obstacle pas tout plus simple est celui matrice est toutes les divisions font par cette remarque apparence anodine est cependant des divisions effet suffit faire changement variable circuit est une nouvelle variable dans anneau anneau des ordre coefficients dans que nous noterons souvent quelle que soit matrice coefficients dans prise chaque intervenant dans une division est maintenant type inversible fin calcul donc det dans fait det dans suffit faire pour dans cas serait donc plus astucieux appliquer avec matrice place matrice car obtient ici peu dans cas des divisions est donc proche cette est cependant peu plus simple car dans strassen manipule rapidement des ordre matrice pour terminer notons que est fait assez curieux que les rapides calcul sans division passent toutes par calcul circuits obtenir det fait division dans par inversible que des additions multiplications dans peut effet faire une division puissances croissantes par jusqu ordre peut invoquer formule valable dans suffit prendre dlog ainsi toutes les affectations correspondant circuit dans des additions multiplications encore des additions multiplications dans suivant est maintenant clair des divisions strassen peut tout circuit pourvu soit dans cas suivant point espace des tel que lorsque circuit est point toutes les divisions qui doivent sont par des inversibles anneau base rajoute alors ces leurs inverses ensemble des constantes circuit particulier des divisions est toujours possible anneau base est corps infini les divisions strassen dans circuit partir point est lui appliquer des divisions strassen utilisant comme point lequel circuit est sans divisions nous appellerons point centre des divisions sur corps infini existence centre des divisions pour circuit fait peut toujours ensemble des une famille finie non formellement nuls leur produit est non formellement nul tel une fonction non identiquement nulle sur est nombre variables est infini exemple des divisions 
donnons titre exemple des divisions pour algorithme pivot gauss dans cas pour des divisions strassen une matrice circuit initial est par programme programme calcul matrice par pivot gauss les coefficients fij matrice dans anneau commutatif arbitraire sortie det calcul est correct situe dans anneau contenant dans lequel tous les forme sont inversibles les programme sont dans notez que les coefficients sont les zfij pour les zfii pour profondeur traitement premier pivot profondeur profondeur profondeur traitement pivot profondeur profondeur profondeur fin pour passer ancien circuit programme nouveau programme page suivante chaque porte yij sauf les portes par les portes yijk dik avec qui donnent les quatre premiers coefficients formelle circuits dans algorithme nous avons pas les portes nulles pour les bas nous avons pas les yijk est inutile pour obtenir final faut remarquer que pour des matrices les formules directes sont bien entendu programme calcul une matrice par pivot gauss des divisions strassen les coefficients fij matrice dans anneau commutatif arbitraire sortie det fait calcule det algorithme fonctionne ligne droite sans aucune restrictive les programme sont dans renommages profondeur traitement premier pivot profondeur profondeur profondeur traitement pivot profondeur profondeur profondeur profondeur fin des divisions strassen des divisions quel est transformation circuit avec division circuit sans division lorsque les sorties sont des les tout abord utilise les algorithmes usuels pour les dans taille circuit sera gros par qui fait reste dans cadre des circuits taille polynomiale par exemple produit deux multiplications additions tandis que division par multiplications autant additions effectue division puissance croissante applique ces constatations dans cas calcul par occasion une matrice par des divisions dans algorithme pivot gauss comme nous avons trouve une taille section circuit que nous avons obtenu pour algorithme comparer notons aussi que multiplication dans par algorithme usuel fait naturellement profondeur log tandis que division par puissance croissante est profondeur log peut pallier dernier utilisant formule qui donne circuit taille log profondeur existe par ailleurs des multiplication rapide pour les les division par dans peuvent par des circuits taille log log log profondeur log voir infra page plus nous utiliserons notation suivante notation pour anneau par contexte nous noterons nombre pour multiplication deux profondeur log strassen obtient alors suivant lorsqu les divisions strassen pour une famille profondeur circuit est par log taille par circuits notons aussi suivant simple concernant les circuits qui des familles second proposition lorsqu les divisions strassen pour une famille longueur multiplicative circuit est preuve lorsqu applique des divisions supposons ait dans anneau des ordre sur ici suppose sans perte que est centre des divisions donc que les sont les obtient pour produit modulo avec avec seule multiplication essentielle puisque sont des constantes calcul analogue pour avec seule multiplication essentielle pourrait avec circuit calculant une famille calcul toutes les partielles une fraction rationnelle nous donnons une pour transformer circuit qui calcule une fraction rationnelle circuit taille comparable taille est par plus qui calcule fois fonction toutes ses partielles circuit est sans division est pour est due baur strassen nous suivons simple constructif que morgenstern fait dans une application importante concerne calcul adjointe une 
matrice avec voisin celui son effet les coefficients bij matrice adjointe sont par bij det det aji calcul des partielles det est mineur ordre obtenu supprimant ligne colonne matrice nous montrons par sur longueur programme qui calcule fonction supposons donc par exemple soit par programme sans division longueur peut les variables programme variable des types suivants avec une constante aussi est par programme extrait longueur par les partielles peuvent par programme longueur alors les formules qui permettent calculer les partielles partir celles dans les cas est cas qui consomme plus instructions nouvelles tout instructions pour calculer pour faut par ailleurs rajouter instruction qui permet calculer fonction des ceci nous permet donc construire partir programme pour calculer ses partielles programme une longueur par par ailleurs initialisation est cas programme avec divisions traite aboutit majoration notions introduction chapitre est aux notions binaire une part directement issue travail des ordinateurs autre part relation avec nombre par algorithme les deux sections sont binaire constituent une rapide guise rappels les trois sections des familles circuits elles servent donc base travail pour les calculs dans tout reste ouvrage dans section nous introduisons les classes importantes nous discutons rapport entre nombre binaire temps effectivement lorsqu travaille avec des les anneau convenablement ceci nous conduit notion famille uniforme circuits aux classes qui sont dans section enfin dans section nous discutons machine les prams correspondant aux circuits assez proche pratique des architectures machines turing machines direct nous donnons ici quelques indications succinctes sur les calcul algorithmique dans lesquels est prise compte taille des objets notions manipuler par exemple temps pour additionner deux entiers base est manifestement ordre grandeur que place par ces deux entiers tandis que algorithme usuel pour multiplication deux entiers tailles utilise temps ordre grandeur que lorsque dans les des logiciens ont termes est calcul algorithmique ils ont abouti des assez quant forme mais identiques quant fond tous les ont abouti notion fonction calculable vers machine turing abstraite cependant est alan turing qui conviction par son par son vraiment est parti calcul doit pouvoir par une machine qui instar calculateur humain dispose une feuille papier crayon selon une suite bien une fois pour toutes plan travail laissant place aucune est sur notion une telle doit suffisamment simple pour consommer une fixe temps imagine donc que machine dispose alphabet fini une fois pour toutes une consiste lire effacer une lettre endroit feuille papier doit cases par exemple prend papier encore vers une case voisine sur feuille papier naturellement autorise nombre fini lettres distinctes dans premier turing utilise une feuille papier une simple succession cases sur une seule ligne potentiellement infinie bande machine turing par suite plus naturel utiliser pour une machine turing qui utilise plusieurs bandes pour son travail quant crayon muni une gomme est par est convenu appeler une lecture qui long bande une lecture pour chacune des bandes certaines bandes doivent contenir algorithme convenablement tandis que les autres sont vides lorsque machine lit serait plus correct mais plus lourd parler une machines turing machines direct endroit convenu une lecture est capable case lue est vide alors une lettre elle est pas vide lire lettre qui trouve effacer pour plus nous renvoyons ouvrage tur sont 
traduits les articles originaux turing ainsi aux ouvrages ste bdg fonctionnement abstrait machine turing fait candidat naturel non seulement pour les questions mais pour les questions particulier pour question temps espace algorithme une fois algorithme traduit dans machine turing temps est simplement par nombre qui sont avant aboutir espace est par nombre cases programmes peut donner machine turing termes programmes sans doute plus parlant pour quiconque programme informatique des programmes nature simple ils sont utilisant des variables les entiers sont binaire programme est une suite finie instructions des types suivants affectations mod div branchements direct aller instruction conditionnel aller instruction conditionnel entier aller instruction les variables sont toutes sauf celles qui les programme puisque les entiers sont binaire voit que chaque affectation branchement peut correspondre travail consommant temps une des variables notions temps est donc raisonnablement comme nombre instructions avant aboutir machines direct comme tout abstrait machine turing est une point plus contestable est implicite selon laquelle une est une autre quel que soit bande dans version machine des variables dans version programme informatique une telle conception heurte des limitations physiques elle est tout cas pas conforme qui passe dans les ordinateurs actuels alan turing participa aventure des premiers ordinateurs les ordinateurs ont une conception globale qui sensiblement machine turing abstraite les sont pas elles sont comme dans image crayon qui sur feuille papier mais elles sont depuis disque dur par exemple vers centre elles sont vers microprocesseur avant vers ces transferts permanents prennent autant plus temps que les sont plus que espace leur stockage est plus grand ceci lieu autre calcul mad des machines direct ram version anglaise avec nombreuses variantes dans mad doit une potentielle registres correspondant stockage des aux cases une bande machine turing serait logique mais est pas option choisie que chaque registre contient une information dont taille est une fois pour toutes pour traiter registre dont adresse est entier que transfert vers centrale requiert temps taille binaire entier dans turing temps correspondant peut nul mais aussi beaucoup plus grand que log selon position des lecture sur chaque bande fin compte selon algorithme selon mad choisi les temps obtenus dans machine turing plusieurs bandes dans les mad pour une taille sont soumis des majorations respectives type suivant voir exemple dans ste chapitre sections machines turing machines direct signalons terme accumulateur qui dans mad microprocesseur espace travail proprement dit nous terminons cette section avec commentaire une plus espace travail dans les mad dans nous avons espace comme nombre total cases effectivement cours algorithme fait veut espace travail proprement dit par algorithme est judicieux une distinction entre espace aux une part espace travail proprement dit autre part convient dans cas que les bandes contenant les sont lecture uniquement elles sont lues une seule passe les bandes contenant les sorties sont uniquement elles sont une seule passe par exemple lorsqu veut faire preuve par pour produit sont comme des base suffit lire une seule passe les aucun stockage des est donne fin oui non sans avoir aucun espace pour travail proprement dit cela sous forme programme informatique type que nous avons cela signifierait que les variables travail sont toutes que les variables les sont seulement lecture elles 
peuvent que via les affectations les variables sortie sont seulement elles peuvent que via les affectations ainsi certains algorithmes utilisent espace travail nul dans cas optimal nettement taille des pour les algorithmes est par ceux qui utilisent aucun espace travail une part par ceux qui utilisent espace travail par rapport taille veut additionner deux entiers suffit les lire une seule passe fur mesure sur bande sortie cependant les sortie sont pas dans sens effet pour pouvoir des algorithmes convention naturelle est que lecture sur chaque doit droite lecture sur chaque sortie doit fin droite sortie notions autre part enfin par ceux qui utilisent espace travail ordre grandeur log est une constante est taille appelle ces derniers des algorithmes logspace binaire les classes calculs faisables grande abondance des calcul consensus fini par sur est calcul faisable dit calcul est faisable encore est dans classe algorithme qui dans les mad temps polynomial par rapport taille plus dit rien concernant tel calcul celui des par exemple mais dit quelque chose concernant calcul correspondant des tailles variables tout cas arbitrairement grandes celui par exemple demande que pour certain coefficients positifs nul pour toute taille algorithme donne temps par les algorithmes logspace sont dans classe ils sont juste titre comme bien meilleurs que les algorithmes qui travailleraient temps espace polynomial voit que notion algorithme classe est une notion asymptotique qui peut assez des calculs algorithme ayant temps calcul correspond pratique quelque chose infaisable tandis que son temps calcul est exponentiel par sup reste facile pour toutes les envisageables alors est pas dans classe nombreux auteurs distinguent les faisables les sont des entiers par des entiers mais sortie est type donc par dans des fonctions faisables les sorties sont des entiers ils symbole pour les faisables classe des fonctions faisables calculables temps polynomial est alors fait une fonction est faisable seulement une part taille sortie est polynomialement fonction taille autre part est dans classe nous introduirons donc pas deux notations distinctes nous ferons confiance contexte pour lever les binaire les classes citons des base qui ont dans une solution algorithmique satisfaisante qui les mettait dans classe bien avant elle des par chinoise pivot occident pivot gauss donne algorithme classe lorsque les coefficients les inconnues sont des nombres rationnels calcul nombre racines par sturm qui avait lors pour son fournit algorithme temps polynomial lorsque les coefficients sont des nombres rationnels calcul une matrice par leverrier est autre exemple calcul aspect algorithmique pour calcul automatique certaines aires par exemple qui frappa les contemporains leibniz newton qui est devenu aujourd hui une des branches calcul formel dont les solutions sont faciles tester conjecture est apparue dans les cook elle correspond intuitive suivante des dont les solutions sont faciles tester mais qui sont difficiles pourrait dire priori que plupart des cherche correspondent paradigme est remarquable que cette intuitive ait recevoir une forme avec des algorithmes tard venue dans monde des conjectures conjecture aujourd hui comme une des plus importantes une dont signification est plus profonde elle toutes les tentatives venir bout beaucoup experts pensent dispose pas aujourd hui des concepts solution alors elle quasiment force une nous allons donner quelques commentaires relativement informels ils sont pour aborder dans les chapitres 
analogue conjecture binaire nous recommandons encore sur sujet les ouvrages bdg ste comme exemple dont les solutions sont faciles tester mais qui sont difficiles nous allons les programmation tel est par une matrice type vecteur colonne type coefficients une solution est vecteur colonne type notions tel que vecteur ait toutes ses pour faire dont nature algorithmique est bien nous nous limitons aux matrices coefficients entiers binaire quant aux solutions nous avons choix nous demandons des solutions nombres rationnels nous parlons programmation rationnels nous demandons des solutions nombres entiers nous parlons programmation entiers pour chacun ces deux une solution est facile tester algorithme qui donne une solution rapide existe une pour programmation rationnels mis point dans les est performant est encore aujourd hui est agorithme dantzig est que pour certaines matrices algorithme mauvais comportement son temps calcul peut devenir exponentiel par rapport taille dans les autres algorithmes qui dans plupart des cas sont nettement plus lents que celui dantzig mais qui tournent temps polynomial pour importe quelles matrices ouvrage sch depuis sait donc que programmation rationnels est dans classe par contre pour qui concerne programmation entiers est toujours pas capable par algorithme classe aux solutions petite taille fait pense sera tout jamais incapable car une dans autre sens signifierait que conjecture est fausse pour expliquer comment est classe nous essayons examiner avec peu recul que signifierait savoir dont sait tester facilement les solutions nous par remarquer que pour bien poser question faut savoir donner sous une forme qui puisse prise comme programme informatique une machine turing peut donc toujours que une suite infinie est justement forme binaire les entiers qui coderaient pas correctement une instance notre doivent pouvoir faciles quant aux solutions elles doivent pouvoir comme telles sortie programme informatique nous supposons donc sans perte programmation est sous forme optimisation nous ici une version plus facile discuter pour notre propos actuel binaire les classes que solution est elle aussi par entier maintenant fonction qui est comme suit est code une solution sinon supposer sait tester facilement les solutions notre famille peut raisonnablement comme signifiant que fonction est dans classe tandis que supposer que est difficile peut raisonnablement comme signifiant que question pas dans classe maintenant nous devons apporter une restriction peut que soit difficile pour une trop bonne raison savoir que les solutions sont taille trop grande plus exactement que taille toute solution croisse trop vite par rapport celle nous notons dans suite cette section taille entier naturel longueur son binaire nous pouvons maintenant est dans classe est une question type suivant une solution taille raisonnable pour telle famille dont les solutions sont faciles tester plus une famille dans est dite dans classe solution revient une question type sont deux entiers positifs est dans classe autrement dit pose sup fonction est dans classe alors fonction est dans classe peut ailleurs supposer sans perte que est mis pour non raison est suivante fonction pourrait temps polynomial par une machine dont fontionnement serait non plus utilisant nos programmes informatiques admettrait des intructions branchement non aller instruction selon humeur moment notions programme peut alors aboutir plusieurs selon chemin choisi lors son programme serait calculer pour une plus grande des valeurs peut 
sortie acronyme vaut alors pour calculable temps polynomial par une machine fonctionnement non notez que avait que personne croit pourrait non seulement calculer dans exemple fonction temps polynomial mais dans cas une positive trouver une solution pour temps polynomial effet pourrait calculer tel par dichotomie temps polynomial posant nombre polynomial fois question qui serait temps polynomial sur les rerait avec certains qui peuvent sembler priori dans classe sont dans classe lorsque quelqu algorithme rapide pour les des spectaculaires ont fin solution temps polynomial des coefficients inconnues celle des programmation rationnels petits vecteurs dans qui conduit notamment factorisation temps polynomial des sur cook que certains classe sont universels pour entre eux est dans classe alors tel est dit complet par exemple programmation entiers est complet limite priori taille des solutions par entier fixe nous pouvons expliquer informellement pourquoi existe des complets ordinateur qui serait soumis aucune limitation physique temps espace serait une machine universelle sens est capable importe quel programme lui soumet faisant abstraction des limitations physiques des premiers alan turing existence une machine turing universelle une importante existence une machine turing universelle est via processus diagonal cantor existence bien pour les machines turing mais qui pourront par aucun type machine turing ensemble des codes turing fonctions calculables binaire les classes vers sens des machines turing est pas calculable sens des machines turing existence complets est nature similaire introduisons notation pour code dans entiers termes programmes informatiques existence une machine turing universelle signifie sait programme universel sens remplit contrat suivant prend entiers binaires entier est texte programme binaire est code pour liste des pour est nombre pendant lequel que soit donne sortie une description codage binaire qui exacte trouve machine qui programme calcul sur valeur chacune des variables programme une part instruction cours autre part par temps est demande que pour ait nous supposons aussi sans perte que les variables sortie sont seulement sont que via les affectations type est pas difficile programme universel naturelle calcule fonction universelle temps polynomial comme obtient quelque chose qui compris comme une dans classe tous les programmes dans classe sur une taille polynomialement expliquons nous tout abord notons fonction dans classe qui donne variable sortie plusieurs sorties codage naturel sorte que les fonctions codage celles sont dans classe suppose aussi sans perte que entier sert compteur est binaire par mot fois lettre ici est prendre pour entier parce veut que fonction universelle soit calculable temps polynomial par rapport taille ses notions entre elles soit maintenant une fonction dans classe qui alors fonction sup dans classe que est dans classe universel existe entier deux entiers tels que avec par ailleurs avec des entiers des entiers binaires inf sinon avec est naturellement une fonction dans classe pour laquelle partir laquelle peut sup qui est dans classe maintenant est clair que pose alors fonction est dans classe ceci montre universel fonction sens nous souhaitions cette preuve peut donner des sur dont temps polynomial celui particulier peut fabriquer une variante est dans classe logspace binaire les classes comptage est ensemble fini nous noterons nombre lorsqu une famille dont les solutions sont faciles tester taille polynomialement peut poser non 
seulement question savoir une solution existe mais combien solutions existent est une fonction dans classe qui alors fonction compte nombre solutions pour question par priori cette fonction est plus difficile calculer que fonction par sup qui est dans classe taille est polynomialement fonction celle les fonctions obtenues cette une nouvelle classe les fonctions comptage pour les dont les solutions sont faciles tester que note prononcer cette classe introduite par valiant dans veut que classe soit une classe une classe fonctions comme classe des type effet puisque est facile calculer par dichotomie temps polynomial fonction partir des tests conjecture que les deux inclusions sont strictes existe des complets existe des fonctions fait que nous avons dans cas fonctionne aussi pour les fonctions comptage effet alors avec fonction que obtient notions binaire des circuits taille fait nombre profondeur circuit programme sont les deux qui mesurent appelle circuit programme sont des fonctions que nous avons les circuit comme souvent asymptotique des algorithmes leur comportement quand ces tendent vers infini nous allons utiliser les notations classiques suivante notation deux fontions dans dit que existe une constante telle que pour tout existe tel que dit dans cas que est ordre que remarquons que pour montrer que suffit trouver une constante entier tels que pour tout nous appellerons une telle constante une constante asymptotique dans grand dans suite chaque fois que sera possible nous nous appliquerons faire constante asymptotique dans grand dans des algorithmes entier sera parfois notation une famille circuits algorithme est dans classe pour dire correspond une famille circuits taille profondeur par exemple algorithme pivot gauss tel dans section est binaire algorithme est dit optimal lorsqu pas algorithme asymptotiquement plus performant point vue taille des dont ordre asymptotique exact nombre pour comme par exemple une sur anneau commutatif quelconque autres par contre comme celui multiplication des matrices sont des dont ignore exacte cause entre les bornes asymptotiques que faut remarquer que grand notation introduite majeur cacher constante asymptotique qui permet elle pourtant une importance pratique puisque deux algorithmes permettant par exemple respectivement avec sont tels que second une asymptotique nettement meilleure que premier peut arriver soit aussi optimal alors que second asymptotiquement moins performant reste plus rapide tant que nombre effectuer pas atteint borne astronomique binaire raconte que inventeur jeu demanda comme grain sur case deux sur quatre sur ainsi suite jusqu cela fait priori circuit profondeur mais pour calculer circuit taille profondeur suffit porte avec algorithme horner est optimal pour page multiplication des matrices est notions fin circuit taille sur permet calculer ceci montre clairement une entre taille circuit celle des objets peut produire lorsqu sur des entiers binaire binaire circuit une famille circuits est par calcul produit lorsqu prend ses dans anneau avec codage exemple plus simple plus important est anneau des entiers binaire naturellement accepte coder entier par circuit sans division ayant pour seules des constantes priori par exemple note zpreval anneau des entiers ainsi voit que circuit sans division dans zpreval est temps suffit mettre les circuits bout bout changeant seulement certaines profondeurs certains identificateurs avec zpreval est alors test signe division euclidienne des circuits avec divisions exactes est donc crucial fois 
anneau codage choisi pour cet anneau lorsqu veut parler binaire circuit signalons sujet notion usuelle peut souvent avantageusement par notion profondeur programme qui lui correspond agit sujet recherche actif prometteur exemple binaire algorithme pivot gauss elle est par nombre pour algorithme avec des sous forme suites bits cette importante corps codage choisi pour les corps est corps fini binaire est proportionnelle est bon cas pour algorithme dans cadre calculs qui constitue aujourd hui une partie importante travail des ordinateurs algorithme binaire est avec des nombres virgule flottante par des suites bits longueur fixe binaire est nouveau proportionnelle mais naturellement travaille pas vraiment avec les corps des garantir les avec une analyse matricielle remplit des rayons entiers dans cet ouvrage nous prenons compte que les calculs exacts infinie parfois nous ferons autre allusion aux aspects proprement des algorithmes que nous commenterons voir cependant page pivot gauss dans corps des rationnels quelques surprises les sont des nombres entiers binaire usuelle doit passer corps des fractions rationnel est alors par couple entiers avec signe strictement positif avec les rationnels ainsi qui est codage binaire naturel est alors devant alternative suivante simplifier les nouvelles matrice elles sont jamais simplifier solution est car les fractions successives voient les tailles leur exponentielle solution quoique moins est car elle implique des calculs pgcd formule dans permet exprimer comme quotient deux extraits matrice elle cas des permutations lignes colonnes sont donc garantie que toutes les fractions qui sont cours algorithme restent taille raisonnable log part une matrice coefficients entiers par taille binaire par valeur absolue nombre dans doit donc par facteur pour tenir compte calcul simplification des fractions binaire elle une majoration fort des facteurs logarithmiques utilise les algorithmes usuels pour multiplication division deux entiers avec corps des fractions pivot gauss heurte type mais nettement car les calculs pgcd surtout plusieurs variables sont notions situations dans lesquelles binaire circuit est rapport avec nous signalerons ici trois situations type premier cas est celui une famille circuits dans anneau avec codage pour lequel les produisent des objets taille bien fait structure circuit proposition une famille circuits taille profondeur est nombre circuit supposons outre que production circuit temps soit enfin anneau dans codage pour lequel les sont temps polynomial avec taille des objets alors production puis circuit dans mad temps par est taille liste des particulier log pour des constantes convenables alors algorithme correspondant famille est globalement temps polynomial preuve dans mad peut utiliser registre distinct pour chacune des variables programme taille tous les est par puisqu elle double maximum quand profondeur augmente une les transferts entre les registres travail accumulateur temps ordre log qui est devant estimation temps des proprement dites remarque dans des machines turing obtient les majorations pour par contre lorsque varie pose gestion nombre non priori variables travail alors une telle machine quant elle nombre priori bandes travail les transferts entre une part bande est liste des contenus des variables travail autre part les bandes sont les prennent normalement temps ordre log binaire car bande stockage doit relue pour chacune des taille est seulement par log ensuit que majoration temps obtenue peut parfois peu moins bonne que 
celle pour mad nombreuses variantes situation peuvent par exemple pour dans est seulement profondeur multiplicative qui doit log pour ait bon taille des objets produits donc ensemble calcul algorithme est dit bien lorsqu correspond une famille circuits dont taille est optimale dont profondeur est log pour certain exposant taille est polynomiale profondeur est alors polylogarithmique log fait nous utilisons dans cet ouvrage terme bien avec sens peu plus pour mot optimal pour les algorithmes temps polynomial nous demandons seulement que qui concerne taille exposant soit pas loin celui meilleur algorithme connu profondeur elle polylogarithmique est sens que nous que les algorithmes csanky chistov berkowitz sont bien cas est celui une famille circuits dont profondeur est pas logarithmique pour laquelle argument nature qui permet mieux majorer taille des objets que argument profondeur est par exemple cas algorithme pivot gauss par des divisions strassen algorithme dans cas algorithme bien comme celui berkowitz dans les majorations taille obtenues par argument direct sont meilleures que celles obtenues par argument profondeur signalons calcul majoration simple qui permet souvent satisfaisant taille des objets dans cas dans anneau style matn dense voir note page les entiers binaire aij est une matrice dans cet anneau note maximum une aij log ceci anneau des matrices coefficients dans notions aijhk est coefficient dans aij alors taille qui est par les formules suivantes sont faciles max max dab ceci signifie que type anneau comporte comme pour tous les calculs majoration taille des objets produits lors circuit particulier taille circuit est polynomiale profondeur multiplicative est logarithmique alors taille des objets est polynomialement plupart des algorithmes que nous examinons dans cet ouvrage ont pour type anneau que nous venons signaler une majoration polynomiale taille des objets signalons revanche mauvais comportement algorithme hessenberg pour taille des objets cas est celui une famille circuits sans divisions dans cadre calcul bien lors circuit les sont des nombres dyadiques comme des nombres pris avec une toutes les portes circuit sont avec une calcul majoration erreur est pour que calcul ait sens calcul dit une chose genre suivant sachant que vous les sorties avec une absolue digits virgule que les sont prises sur intervalle par alors vous devez circuit effectuant tous les calculs avec particulier les doivent prises avec cette par exemple pourra imaginer une famille circuits sens fonction sur intervalle circuit doit permettre cette fonction sur intervalle avec tous les calculs avec une famille peut produite temps polynomial requise peut par alors fonction ainsi est dite calculable temps polynomial kko cela signifie que cette fonction peut avec sur importe quel dans intervalle par temps qui polynomialement agit donc analyse parfaitement type algorithmes est phase sur machine familles uniformes circuits cela peut comme une des importantes par calcul formel familles uniformes circuits les algorithmes calcul usuels ont nombre sorties qui plusieurs entiers comme par exemple multiplication deux matrices pour fixer les tailles des deux matrices produit une liste matrices une liste entiers pour une matrice nous avons ces des comme nous avons dit est pas seulement taille profondeur circuit fonction des qui sont importantes mais aussi son production pour calculer une matrice coefficients entiers dans situation plus possible par exemple doit abord produire texte programme correspondant circuit 
envisage ensuite programme sur liste voulue circuit est faible profondeur faible taille mais que production programme correspondant vite lorsque augmente peut satisfait est raison pour laquelle introduit notion famille uniforme circuits dit une famille circuits par les est uniforme lorsque production circuit tant que texte programme raisonnable des une notion consiste demander que production circuit soit dans classe temps polynomial une notion plus forte consiste demander soit dans classe logspace que espace travail production circuit soit logarithmique ces notions sont relativement satisfaisantes mais elles mieux dans chaque cas concret est clair une famille circuits qui aurait une profondeur log une taille production serait pas bon cru pour dans sur sujet silence discret fait tout monde notions apparemment est bien clair que production circuit pas ordre grandeur bien taille nous nous contenterons confirmer cette impression par cas multiplication rapide des matrices strassen nous renvoyons pour cette chapitre section classes pour les notions taille profondeur des familles circuits sans exiger que ces familles soient uniformes binaire les les sorties algorithme sont des mots sur alphabet par exemple alphabet des entiers binaire est alors naturel utiliser les familles circuits pour les notions taille profondeur algorithme dans circuit chaque est les portes sont trois sortes avec deux seul pour chaque longueur algorithme comme une suite finie circuit correspondant doit calculer sortie mais sans famille aboutirait des intuitifs puisque toute fonction vers telle que que longueur est par une famille non uniforme circuits taille profondeur vrai dire circuit sert rien aucune est une telle fonction peut pas calculable pour entier naturel note classe toutes les fonctions qui peuvent par une famille uniforme circuits dans logk est entier positif par circuit nombre portes polynomialement est prise ici sens plus fort que nous avons cette section est logspace pour une famille circuits existence une machine turing qui pour donne sortie codage circuit utilisant espace log pose agit acronyme pour nick class nom nicholas peppinger qui cette classification des algorithmes machines direct alors mais inclusion dans autre sens des deux classes est ouvert est que inclusion est stricte peut des notions analogues bcs serait alors distinguer dans les notations classe sens celle outre peut exiger pas exiger famille circuits peut aussi vouloir indiquer sur quel anneau commutatif travaille dans cadre cet ouvrage nous pas multiplier les notations nous garderons notation pour parler des familles uniformes circuits logk est somme des circuit est entier positif nous demandons outre que tous les aux noeuds circuit soit par enfin nous prendrons sens plus modeste famille des circuits doit seulement construite temps polynomial seule vraie preuve que nous faisons est ailleurs celle construction que nous donnons est pas logspace par contre notre est plus qui concerne temps construction circuit plupart des autres algorithmes dans cet ouvrage ont une preuve plus simple alors analogue celle pour dans cas des familles non uniformes qui ont intensivemnt par valiant nous utiliserons les notations voir chapitres machines direct nous dans cette section quelques machines susceptibles des familles circuits nous pas cependant les questions programmation pour les machines principal objet conception algorithmes est temps calcul permettant moyennant nombre suffisant mais raisonnable processeurs notions une des calculs sur ordinateur unique 
nous devons faire choix algorithmique machine direct random access machine ram est une abstraction ordinateur von neumann nous ici analogue algorithmique celui des machines direct parallel random access machines pram qui constitue standard une machine direct pram est une machine virtuelle abstrait nombre processeurs partageant une commune globale nombre registres auxquels ils ont pour lire pour des des calcul chaque processeur propre locale taille inaccessible aux autres processeurs elle lui permet une seule temps calcul comme suite instructions suivantes chercher ses dans globale effectuer une des division quand elle est permise sur ces dans registre commune globale faisant abstraction tous les globale communication interconnexion entre processeurs une temps calcul dans tel abstrait correspond cette par certain nombre processeurs les processeurs actifs autres processeurs pouvant rester inactifs des par ensemble des processeurs actifs est une que les sont disponibles processus quand chaque processeur puise ses dans globale fin une quand chaque processeur actif son calcul calcul aux contraintes entre dans algorithme nombre processeurs ainsi que nombre registres sont habituellement fonctions taille traiter machines direct existe plusieurs variantes pram selon mode globale concurrent exclusif sera une lecture dans registre est permise seul processeur fois une pramcrew lecture est concurrente exclusive une pramercw lecture est exclusive concurrente une pramcrcw lecture dans registre globale sont permises pour plusieurs processeurs fois dans les deux derniers cas faut que deux processeurs mettent dans registre des qui donne autres variantes machines pram selon mode gestion concurrence mode prioritaire arbitraire existe une entre ces variantes moins puissante erew plus puissante crcw prioritaire ces pram sont fait pour classe des qui nous dans sens ils autre par des techniques simulation nous utiliserons pour description analyse des algorithmes qui nous concernent variante dont conception est proche notion circuit programme puisqu une peut par circuit dans lequel les les chacun des autres internes aussi bien processeur actif que contenu registre globale correspondant cette enfin profondeur circuit programme telle que nous avons section correspond nombre calcul plusieurs permettent mesurer que nous appellerons algorithme ces sont temps qui est nombre calcul qui correspond temps algorithme est aussi que appelle profondeur algorithme erew comme exclusive read exclusive write crew comme concurrent read exclusive write etc notions nombre processeurs nombre maximum processeurs actifs durant une quelconque calcul sachant processeur peut durant une plusieurs successives temps algorithme nombre qui interviennent dans calcul qui revient temps disposait que seul processeur encore somme des nombres processeurs actifs durant toutes les calcul est que appelle aussi taille parfois surface calcul algorithme travail potentiel surface totale algorithme qui est produit nombre processeurs par nombre calcul temps tous les processeurs actifs durant toutes les calcul peut parfaite analogie des jusqu ici entre circuit programme par tableau suivant temps temps nombre processeurs programme evaluation profondeur longueur largeur circuit profondeur taille largeur tableau nombre processeurs dans une pram est largeur dans programme temps dans une pram est analogue longueur taille programme temps correspond profondeur algorithme est alors comme rapport entre temps travail potentiel cet algorithme encore rapport entre surface 
calcul surface totale algorithme pour revenir exemple algorithme pivot gauss voir page qui cet algorithme peut par tableau suivant rectangle gauche dont les lignes correspondent aux successives calcul les colonnes aux processeurs ceux une croix sont les processeurs actifs cours une machines direct processeurs etape etape etape etape etape etape etape processeurs etape etape etape etape etape etape etape etape etape algorithme peut par une pram deux processeurs rectangle droite lieu quatre moyennant une augmentation nombre ralentissement des calculs avec lieu pour chaque rectangle surface surface calcul temps surface totale travail potentiel longueur largeur rectangle respectivement temps nombre processeurs cet algorithme passe quand est par pram initiale avec pram environ nous introduisons maintenant notation classique suivante pour qui sera dans suite notation note pram classe des taille par algorithme avec processeurs tout algorithme qui sur une telle machine permet cette classe est par abus langage comme appartenant cette classe dira que est algorithme pram algorithme par une pramcrew est une notion relative partir temps algorithme choisi comme algorithme agit qui nous concerne pour algorithme multiplication des matrices ordre par notions une log avec processeurs peut supposer algorithme est dit par rapport algorithme temps existe tels que soit dans pram logm logk nous verrons plus loin des exemples algorithmes comme celui inversion des matrices fortement page pour lesquels prend comme algorithme celui multiplication usuelle resp rapide des matrices par circuit log resp log pour principe brent principe brent affirme peut intelligemment travail entre les calcul afin diminuer significative proportion des processeurs inactifs lem proposition algorithme dont temps sur une pram est dont temps est peut sur une pram utilisant processeurs calcul sans changer temps preuve supposons effet calcul peut raison base par directement calcul sur une pram pour nombre processeurs sera alors max prenant processeurs lieu avec pour cas proposition est triviale peut calcul faisant effectuer les base par les processeurs dmi comme dmi bmi nombre total avec une pram processeurs pas bmi machines direct principe est utile lorsque temps est quand devant temps algorithme puisqu peut pratiquement diviser nombre processeurs par doublant simplement temps algorithme prend par exemple algorithme logk est positif entier naturel quelconque donne par application principe brent algorithme pram logk logk cela permet dans pratique prix ralentissement relatif multiplication temps calcul par une petite constante algorithme diminuant temps des processeurs par une rapport entre travail potentiel surface totale travail surface calcul ceci par une des calculs dans sens une meilleure des processeurs entre les nous suivante qui relie des circuits celle des pram proposition algorithme est algorithme pram inversement tout algorithme dans pram est algorithme remarque dire algorithme est par rapport algorithme temps revient dire est logm logk pour couple diviser pour gagner introduction dans chapitre nous une approche bien connue sous nom divide conquer que peut traduire par diviser pour auquel nous concept diviser pour gagner parce que mieux nous calcul avoir principe nous utilisons pour deux classiques algorithmique que nous serons utiliser dans suite calcul produit calcul des parallel prefix algorithm pour dernier nous plus algorithme classique une due ladner fischer pour obtenir une famille circuits taille profondeur logarithmique est 
meilleur connu heure actuelle nous appliquerons diviser pour gagner plusieurs autres occasions dans les chapitres suivants notamment pour les multiplications rapides matrices pour rapide sur les corps principe approche diviser pour gagner applique pour une famille elle consiste diviser style avec auxquels peut appliquer algorithme que celui qui permet initial pour ensuite final partir des solutions des diviser pour gagner entier nombre des qui seront lorsqu pas appelle algorithme ainsi obtenu une telle approche conception algorithmes permet souvent apporter une solution efficace dans lequel les sont des copies initial avec sensiblement par exemple est entier cette nous permet analyser algorithme elle produit calculer des majorants asymptotiques taille profondeur circuit correspondant avec une estimation constante grand effet supposons que traiter est peut avec suceptibles remarquons tout suite que est entier est pourquoi dans cas cet algorithme resp taille resp profondeur circuit correspondant calcule par sur aide des formules suivantes resp taille resp profondeur des circuits correspondant double partitionnement solution partir des solutions partielles absence facteur dans exprimant profondeur est due fait que les taille sont avec des circuits profondeur maximum donne admet pour solution dans cas est une constante sachant que profondeur pas reste devient principe nous rappelons les dans qui suit proposition soient variable nous supposons que traiter est avec peut type avec suceptibles nous notons resp taille resp profondeur des circuits correspondant double partitionnement solution partir des solutions partielles enfin sont taille profondeur circuit qui traite alors taille profondeur circuit produit utilisant diviser pour gagner sont particulier avec log obtient nsup log log donnons rapide sur quelques cas particuliers significatifs que nous allons traiter dans suite dans calcul des section nous avons naturelle qui conduit une famille circuits log log nous verrons peut encore borne sur taille dans multiplication des karatsuba section nous avons qui conduit une famille circuits nlog log dans multiplication rapide des matrices strassen section nous avons qui conduit une famille circuits nlog log enfin pour inversion des matrices triangulaires section nous avons qui conduit une famille circuits diviser pour gagner circuit binaire approche diviser pour gagner premier nous donne construction type particulier circuits taille profondeur dlog que appelle les circuits binaires balanced binary trees circuit binaire est circuit prenant une liste loi associative avec neutre donnant sortie produit peut supposer quitte liste par qui change pas circuit est divisant deux taille qui correspondent deux acceptant chacun une liste taille ces deux calculent respectivement les deux produits partiels ensuite produit multipliant ces deux produits partiels ainsi circuit binaire pour une taille est par sur pour est circuit trivial taille profondeur nulles pour circuit prend une liste longueur fait agir deux copies circuit pour calculer utilise pour final comme indique figure page note taille profondeur circuit obtient les relations avec qui admet solution exacte avec proposition circuit binaire qui prend une liste une liste dans donne sortie produit est circuit profondeur dlog est taille est une puissance calcul des figure construction circuit binaire partir circuit binaire cette taille est tous cas par lorsque est pas une puissance notons peut trouver une majoration meilleure taille pour calcul des une liste dont 
loi non commutative est multiplicativement dont neutre est calcul des consiste calculer les produits partiels pour solution donne circuit taille est taille minimum profondeur est facile voir que calcul peut pour obtenir circuit profondeur dlog diviser pour gagner peut toujours supposer dlog quitte liste par copies neutre peut deux taille qui seront calcul des pour liste calcul des pour liste pour solution principal est ensuite obtenue par multiplication produit faisant partie solution premier par les produits partiels des qui constituent solution second cette augmente par multiplications taille circuit profondeur pour cas par exemple prend pour avoir une puissance fait obtient circuit qui montre cette pour calcul des sept huit produits puisque notre les relations circuit calcul des pour donnent taille profondeur circuit correspondant calcul des pour une liste taille suffit effet faire pour calcul des pour obtenir log log ainsi calcul des pour une liste bien admet une solution log log encore utilisant principe brent proposition une solution qui est pram log ladner fischer obtiennent meilleur donnant une construction circuit log est que nous allons paragraphe suivant calcul des ladner fischer entier dans nous allons construire instar ladner fischer deux familles circuits tailles respectivement par profondeurs respectives dlog dlog qui calculent les cette construction fait conjointement partir circuit trivial une seule porte porte figure page suivante montre cette construction conjointe des deux familles construction famille circuit partir des circuits respectivement aux qui forment une partition liste comme calcule suffit effectuer une seule les multiplications par les sorties pour avoir les par tous les liste partant circuit trivial figure page illustre cette construction construction circuit quant elle fait partir circuit elle est par figure page diviser pour gagner figure construction des circuits pour dlog construction famille commence par calculer une seule les produits rang impair par suivant rang pair dans liste est pair est impair applique circuit pour obtenir sortie les longueur paire multiplie enfin les respectivement par les est impair pour obtenir plus est les autres longueur impaire calcul des figure construction des circuits ellement est impair obtient ainsi circuit partir circuit ajoutant maximum deux sortie comportant total est pair est impair les circuits page sont des exemples circuits pour quelques valeurs analyse des circuits note resp taille resp profondeur circuit pour cette construction donne les relations suivantes pour taille pour profondeur max diviser pour gagner etape sorties etape figure construction circuit partir circuit les lignes sont absentes est pair avec pour tout faut remarquer que dans peut stricte voir par exemple circuit dans les circuits page pour convaincre dans est par fait que dans circuit correspondant produit dont besoin pour calculer une les autres trouve exactement profondeur dans qui calcule produit est facile voir partir des par une sur que les profondeurs des circuits pour dlog dlog pour calculer les tailles des circuits partir des nous allons abord cas est une puissance faisant dlog posant avec pour les deviennent calcul des circuit circuits pour quelques valeurs posant les relations permettent que comme que est suite fibonacci par suite fibonacci est par relation pour tout diviser pour gagner qui donne lorsque est une puissance les majorations dans cas contraire est facile utilisant directement les relations obtenir par sur les majorations 
suivantes vraies pour tout qui donne suivant ladner fischer qui montre que calcul des est pram log log ladner fischer calcul des une liste dans non commutatif fait par circuit profondeur dlog taille aussi par circuit profondeur dlog taille multiplication rapide des introduction soit anneau commutatif unitaire anneau des une sur produit deux est par avec pour algorithme usuel pour calcul des coefficients correspond circuit profondeur log suppose taille avec multiplications additions dans anneau base pour cela donne algorithme log dans les trois sections nous exposons deux multiplication des dans section nous expliquons karatsuba facile pour importe quel anneau commutatif avec nlog log bien meilleur est obtenu log log transformation fourier ahu knu pour anneau auquel applique une telle transformation ceci fait objet des sections dans section nous exposons une due cantor kaltofen qui ont tout anneau commutatif unitaire exhibant algorithme log log log log avec multiplication rapide des nombre multiplications dans anneau base facteur log log augmentation nombre additions pour travail fallu adjonction racines principales anneau peut comparer borne obtenue avec meilleure borne actuellement connue qui est dans section nous donnons lien entre multiplication des celle des matrices toeplitz triangulaires nous concernant produit une matrice toeplitz arbitraire par une matrice arbitraire karatsuba deux arbitraires leur produit les sont chacun par coefficients leur produit peut appliquant directement formule qui alors multiplications additions les multiplications peuvent une seule calcul les coefficients sont ensuite dlog coefficient addition plus longue est celui une cette multiplication est adopter une sur fait que produit deux peut effectuer avec seulement multiplications lieu nombre passant effet peut calculer posant qui correspond circuit profondeur totale largeur profondeur multiplicative maintenant deux arbitraires leur produit ces unique sous forme avec sont avec coefficients alors sont avec coefficients karatsuba supposons programme kara calcule les coefficients produit deux arbitraires avec une profondeur multiplicative une profondeur totale une largeur nombre multiplications nombre donc avec pour nombre total utilisation des donne circuit kara que nous avons dans programme programme kara les coefficients dans anneau commutatif arbitraire deux sortie les coefficients produit des deux profondeur profondeur kara kara profondeur kara profondeur fin notez que ligne avec profondeur ligne des deux programmes kara kara qui ont avec les deux affectations sur ligne profondeur sur ligne avec profondeur affectation correspond ligne programme kara qui profondeur tandis que les deux autres affectations sont profondeur constate donc que lorsqu passe kara kara selon dans programme profondeur passe profondeur multiplicative pas multiplication rapide des largeur passe sup nombre multiplications est maintenant nombre est nombre total passe comparaison pour multiplication usuelle des nombre multiplications passe nombre nombre total veut minimiser nombre multiplications initialisera processus avec kara produit deux constantes mettra place les circuits successifs kara kara kara kara selon circuit kara est ensuite pour produit deux pour deux exactement aura ainsi circuit usuel qui utilise multiplications par circuit kara qui utilise nlog multiplications gain concernant nombre total est style notant pour passe effet les valeurs sont relation avec aide maple fait devient meilleur que partir pour des enfin concernant 
largeur circuit donne pour nous pouvons conclure avec proposition suivante proposition multiplication deux par karatsuba fait nlog log plus produit deux peut compte pas les substitution les multiplications par par qui reviennent fait des coefficients log transformation fourier usuelle par circuit profondeur multiplicative profondeur totale largeur nlog avec nlog multiplications notons que pour deux dont les sont compris entre obtient seulement les majorations suivantes appelant plus grand log pour profondeur nlog pour largeur nlog pour taille circuit remarquons aurait envisager une autre partition coefficients des pour une application savoir avec alors une sur cette partition produirait des circuits avec une estimation analogue pour qui concerne taille mais une profondeur log lieu log pour produit deux lorsque transformation fourier usuelle bien meilleur que nous exposons dans cette section suivante est obtenu log log transformation fourier pour anneau auquel applique une telle transformation transformation fourier que nous ici par sigle tfd est sur anneau commutatif unitaire pour entier condition disposer dans une racine principale pour dans anneau toute racine primitive est principale mais ceci mis dans anneau contenant des diviseurs dans anneau une racine primitive indicatrice euler dans les racines principales sont les nombres complexes tels que premier avec est tel que mais multiplication rapide des est clair que est une racine principale alors est transformation fourier ordre sur racine principale est application tfdn pour tout par tfdn est cette application peut aussi vue comme homomorphisme tfdn qui tout associe vecteur des valeurs aux points effet notant loi produit par est que tfdn tfdn tfdn tant application tfdn est dans les bases canoniques par matrice vandermonde plus est inversible dans anneau par son inverse alors matrice est inversible dans elle admet pour inverse matrice transformation fourier rapide dans cas modulo identification application tfdn est isomorphisme tfdn nous proposition supposons que anneau commutatif une racine principale que est inversible dans alors transformation fourier tfdn est isomorphisme modules tfdn par ailleurs identifie module source application tfdn avec choisissant exprimant sur base des alors tfdn isomorphisme munie multiplication des vers munie multiplication par bref pour deux modulo est algorithme multiplication rapide que nous explicitons dans section suivante transformation fourier rapide cas favorable dans proposition peut deux calcul produit une sur condition que anneau nous supposons une racine principale que est inversible dans alors proposition pour tfd ordre sur anneau traduit par car calcul modulo donne exactement calcul produit deux par tfd est dans algorithme page suivante lem suivant nous permet tout abord montrer comment une tfd ordre peut rapidement moyen une diviser pour gagner multiplication rapide des algorithme multiplication des via transformation fourier deux sur anneau convenable voir proposition sortie produit deux tfd ordre multiplications dans pour obtenir fourier calcul inverse une tfd ordre pour obtenir fin lem soit entier dlog transformation fourier ordre son inverse dans anneau une racine principale dans lequel est inversible font log log plus taille profondeur circuit correspondant sont respectivement par log log pour transformation directe par log log pour transformation inverse coefpreuve soit ficients dans dlog sorte que une racine principale agit calculer les valeurs aux points peut mis sous forme avec deg 
deg remarquons que est une racine principale que que pour comme aussi pour qui donne toutes les valeurs les points tfd ordre calcul suivant deux tfd ordre transformation fourier rapide multiplications par les avec une seule calcul suivies additions dans anneau base une seule respectivement taille profondeur algorithme ainsi obtient les relations suivantes valables pour tout entier qui donne par sommation sachant que comme par log que log que log pour tfd inverse ordre nous avons que cela signifie que peut les coefficients partir vecteur des valeurs aux points tfd ordre racine prinen effectuant sur vecteur cipale multipliant ensuite vecteur par par tfd inverse ordre peut faire par circuit taille profondeur algorithme qui introduit lem nous permettent estimer avec algorithme multiplication rapide des suivant anneau une racine principale dans lequel est inversible alors utilisant algorithme avec dans preuve lem multiplication deux coefficients dans fait aide circuit taille log profondeur log preuve supposons abord deux tfd ordre suivies une avec multiplications dans anneau base termine par une transformation inverse ordre multiplication rapide des preuve lem donne majoration taille par log profondeur par log dans cas faut remplacer par log par log rappelons que pour anneau par contexte nous notons nombre pour multiplication deux profondeur log nous dit donc log est inversible anneau des racines principales pour tout algorithme tfd rapide pour anneau commutatif arbitraire algorithme que nous venons est pas valable lorsque divise dans anneau puisque dans tel anneau division par peut pas unique lorsqu elle est possible peut essayer contourner cette par entier tel que divise pas dans lorsqu tel entier existe supposer dispose une racine principale dans faut encore disposer algorithme performant pour division par quand elle est possible pour pouvoir effectuer transformation fourier inverse outre tel entier existe pas pour radicalement cantorkaltofen dans est calculer uab vab avec deux entiers premiers entre eux puis utilisant une relation bezout entre par exemple prend calcule sans aucune division par formule est une racine principale calcule par formule est une racine principale reste obstacle taille qui consiste rajouter substitut formel lorsqu les pas transformation fourier rapide sous main dans anneau toute simple faire les calculs dans anneau est susbstitut formel donne pas effet une dans anneau correspond priori grosso modo dans qui annule transformation fourier cantor kaltofen pour est appliquer une diviser pour gagner peu semblable celle lem anneau description algorithme font appel aux cyclotomiques dont nous rappelons maintenant quelques cyclotomique est partir une racine primitive groupe multiplicatif cyclique des racines dans une par exemple dans avec cyclotomique est par est unitaire coefficients entiers dont les sont les racines primitives dont est est aussi les cyclotomiques outre les suivantes signifie que est diviseur positif pour tout nombre premier premier divise pas est impair particulier que est une puissance nombre premier sinon rajouter formellement une racine primitive dans revient anneau dans cet anneau multiplication rapide des une addition additions dans pour une multiplicaq tion peut travailler dans modulo puis obtenu modulo cette est ment peu car est unitaire qui peu coefficents non nuls cette remarque permet voir que les multiplications dans sont pas tellement plus que les additions elle donne une comment pourra une diviser pour gagner rendre peu les calculs dans anneau 
algorithme donne alors suivant existe une famille uniforme circuits profondeur log qui calculent produit deux coefficients dans anneau commutatif arbitraire avec log multiplications log log log remarque algorithme prend deux donne sortie calcule tout abord sont deux petits entiers premiers entre eux sont pas trop grands par rapport constante grand dans estimation log log log taille circuit calculant est ordre est premier ensuit utilisant les deux valeurs optimales algorithme devient plus performant que algorithme nlog que pour les valeurs qui sont ordre remarque multiplication rapide des est fait couramment analyse prenant des approximations des racines dans cela laisse supposer une efficace cette multiplication rapide est possible calcul formel avec des anneaux tels que anneau sur suffit effet faire calcul avec une suffisante pour que calcul soit garanti avec une meilleure que une autre solution voisine mais est plus facile serait faire calcul non dans mais dans anneau entiers adiques voir par exemple ser tel anneau contient une racine primitive est inversible produits matrices toeplitz produits matrices toeplitz nous signalons ici une matricielle produit deux module libre des muni base canonique des multiplication par resp resp est sur cette base par une matrice toeplitz triangulaire resp resp tab tab par exemple avec obtient produit qui est matrice toeplitz triangulaire dont colonne est par les coefficients produit inversement produit deux matrices toeplitz triangulaires dans peut comme produit deux encore comme produit dans anneau des par exemple bref pas significative entre produit produit matrices toeplitz triangulaires produit une matrice toeplitz triangulaire par vecteur multiplication rapide des voyons maintenant question produit une matrice toeplitz arbitraire par vecteur par exemple suffit matrice dans matrice multiplication par dans module libre des voit alors queple calcul produit par important suivant voit que produit par une matrice toeplitz est plus cher que produit par une matrice creuse proposition produit une matrice toeplitz une matrice arbitraire toutes deux ordre peut faire par une famille circuits log remarque plus supposons que dans anneau commutatif multiplication par soit alors produit une matrice toeplitz par une matrice est ceci est exemple des concernant les matrices toeplitz nous renvoyons lecteur par sujet ouvrage multiplication rapide des matrices introduction multiplication des matrices coefficients dans anneau commutatif unitaire fait objet multiples investigations durant les trente vue nombre dans calcul produit une matrice par une matrice borne asymptotique nombre est que est nombre multiplications essentielles qui asymptotique multiplication des matrices comme nous allons voir tout abord travers algorithme multiplication rapide strassen algorithme conventionnel dit usuel pour calcul produit cij une matrice aij par une matrice bij fait par mnp multiplications additions calculant une seule les mnp produits aik bkj calculant ensuite dlog les sommes cij intervenant dans les formules cij aik bkj pour particulier pour multiplication deux matrices ordre cet algorithme correspond circuit taille profondeur dlog avec multiplications additions dans premier temps les investigations portaient sur diminution nombre multiplications essayant coefficient sans occuper exposant est winograd qui premier coefficient mais doublant presque multiplication rapide des matrices nombre additions qui constitue prix dans asymptotique sait que dans une large classe anneaux 
multiplication est beaucoup plus que addition beaucoup pensaient que winograd serait optimal sens que multiplications seraient pour calcul produit deux matrices voir knu page mais une plus tard strassen montra que pouvait multiplier deux matrices utilisant seulement multiplications sur fait simple que produit deux matrices coefficients dans anneau non commutatif pouvait avec seulement multiplications lieu nombre additions passant donna les relations prouvant fait dans son fameux article gaussian elimination optimal winograd donna peu plus tard une variante multiplication rapide strassen avec seulement additions comme ces relations utilisent pas multiplication elles appliquent calcul produit deux matrices quelconques coefficients dans selon diviser pour gagner section est une analyse multiplication rapide des matrices dans version nous construction famille circuits qui correspond version originale strassen comme dans section dans section nous montrons que inversion des matrices triangulaires fortement peut par des circuits avec une taille ordre que les circuits multiplication des matrices une profondeur ordre lieu log dans section nous introduisons les notions multiplicative rang tensoriel nous montrons central par notion rang tensoriel dans asymptotique multiplication des matrices strassen nous montrons qui dit que exposant multiplication des matrices que corps base conjecture fait que cet qui est pas vrai par exemple dans corps des fractions rationnelles addition est plus analyse strassen exposant est pour tous les corps pour anneau des entiers relatifs dans section nous nous attaquons des algorithmes nettement plus qui appuient sur notion calcul approximatif introduite par bini leurs performances asymptotiques aucun des algorithmes cette section semble devoir sur machine dans proche avenir nous pourtant que serait crime contre que pas moins partie les fascinantes qui sont nous avons cependant pas laser due strassen bcs car nous avons pas comment donner une assez exacte termes suffisamment simples cette conduit meilleure borne connue pour exposant multiplication des matrices estimation actuelle cet exposant est winograd coppersmith analyse strassen strassen version winograd dans anneau non commutatif deux matrices avec alors matrice peut obtenue par calcul suivant ces relations strassen version winograd anneau des matrices ordre calcul produit deux multiplication rapide des matrices matrices celui sept produits matrices sommes matrices type analyse faite section montre que passage multiplications est avantage nombre des additions par ailleurs cela tient que est dans diviser pour gagner tandis que nombre additions intervient que dans constante pour partant initial une part les type autre part solution initial partir des solutions des proposition page posant les aij bij cij sont des matrices programme comportant les instructions suivantes dans lesquelles les affectations des variables correspondent aux multiplications celles des variables cij correspondent aux avec indication des calcul une matrice programme donne circuit taille profondeur dans anneau les relations est par fait que les que des additions matrices ont une profondeur les additions correspondantes dans faisant alors que comprenant les multiplications matrices etape est profondeur utilisant algorithme usuel pour multiplication deux matrices peut dlog dans suite nous dirons simplement additions signalons que pour version originale strassen avec additions page profondeur relation analyse strassen algorithme multiplication matrices 
par blocs les multiplications fin qui donne dlog comme pour profondeur circuit correspondant calcul produit deux matrices prend version originale strassen donne dlog dlog concernant taille dans donne successivement multiplication rapide des matrices qui donne comme pour taille circuit correspondant calcul par strassen variante winograd produit deux matrices ainsi dlog dlog particulier est une puissance comme log nlog dlog obtient dlog seulement pour version originale strassen mais coefficient nlog dans peut lorsque est une puissance effet peut directement que nombre dans multiplication usuelle des matrices nlog pour pose log sorte que des donne alors nlog puisque remplace par nlog ceci conduit donc suivant strassen mais dans lequel nous version avec additions winograd multiplication deux matrices coefficients dans anneau arbitraire est dans classe nlog log plus lorsque est une puissance elle fait soit avec circuit dont taille profondeur sont respectivement par nlog dlog soit par circuit dont taille profondeur sont respectivement par nlog dlog version originale strassen donne log analyse strassen notez aussi que profondeur multiplicative ces circuits est fait conclusion dans est non seulement existe une famille circuits dans classe nlog log qui multiplication des matrices mais sait construire explicitement une famille uniforme tels circuits ceci est objet paragaphe qui suit avec exemple construction uniforme une famille circuits nous allons maintenant tenir une promesse que nous avions faite dans section celle analyser exemple construction uniforme typique une famille circuits pour laquelle production circuit famille pas ordre grandeur bien taille nous utiliserons pour cet exemple multiplication rapide des matrices originale strassen qui repose sur calcul suivant dans anneau non commutatif deux matrices avec alors matrice peut obtenue par calcul suivant qui multiplications ceci peut sous forme circuit profondeur concernant les variables note pour aij pour bij obtient programme page suivante que nous appelons strassen consiste utiliser ces formules doit multiplier des matrices lignes colonnes les partitionne chacune matrices multiplication rapide des matrices programme produit deux matrices ordre sur anneau non commutatif strassen les coefficients dans anneau arbitraire deux matrices ordre sortie les coefficients produit profondeur profondeur les multiplications profondeur profondeur fin lignes colonnes qui jouent des aij bij dans les formules obtient circuit profondeur log comportant mlog multiplications usuelle donne circuit profondeur comportant multiplications notre est temps pour programme correspondant supposons ait programme pour multiplication deux matrices lignes colonnes avec les avec les sorties analyse strassen comment programme les sont maintenant avec notons les matrices extraites lignes colonnes avec commence par programme aux matrices les matrices pour moyen des affectations matricielles cela signifie dans anneau base avec pour profondeur multiplication rapide des matrices ensuite les matrices pour pour cela agit fois avec chaque fois une convenable programme pour avec les transformations suivantes les variables sont par les variables les variables sont par les variables toute variable dans avec une profondeur est par variable particulier obtient sortie les variables profondeur qui sont les coefficients des matrices reste enfin les affectations matricielles profondeur profondeur cela signifie avec profondeur profondeur programme qui pour donne sortie texte programme est 
programme type loop program programme boucles pour pour faire structure simple lorsqu sous forme une machine turing texte gestion des boucles occupe temps par rapport aux instructions qui permettent successivement faut que fin texte doit sur une bande sera pendant inversion des matrices triangulaires car durant cette bande sera par est temps pour taille obtient les formules suivantes les sont des constantes puisque est devant nous pouvons comme suit rappelons que nous notons log pour max lorsqu utilise strassen pour construire une famille circuits pour multiplication des matrices ordre peut construire une machine turing qui code programme temps ordre grandeur que taille sortie mlog naturellement comme habitude sur temps calcul mlog est encore valable lorsque est pas une puissance les matrices dans par des lignes colonnes inversion des matrices triangulaires les notations que nous maintenant concernant multiplication des matrices seront dans toute suite ouvrage quand nous aurons faire des calculs notation nous supposerons que calcul produit deux matrices fait par circuit taille profondeur log largeur log sont des constantes positives certains calculs dans suite ouvrage conduiraient des formules pour les cas est raison pour laquelle nous avons exclure cette valeur qui est toute pas qui est pour multiplication rapide strassen pour toutes les autres multiplications rapides connues est pas non plus restrictive simplifie quelques calculs multiplication rapide des matrices approche diviser pour gagner donne algorithme qui montre que inversion une matrice triangulaire inversible autrement dit fortement admet une solution avec une constante asymptotique ordre pour taille ordre pour profondeur circuit proposition soit anneau arbitraire entier une matrice triangulaire inversible alors inverse peut par une famille uniforme circuits taille profondeur log preuve peut toujours supposer dlog quitte rajouter lignes colonnes matrice remplir partie restante par matrice remplacer matrice par qui revient matrice pour tous entiers naturels matrice nulle lignes colonnes calcul est inversible celui puisque dans cas est inversible ainsi par matrice peut comme une matrice elle est triangulaire avec laires donc est fortement sont fortement plus calcul fait avec circuit tique taille profondeur ensuite matrice partir calculant produit trois matrices qui donne les relations vraies pour tout avec obtient par sommation lorsque avec log log ici sur ligne pour cas remplace par log par log obtient les majorations log soit corps trois espaces vectoriels dimensions finies rappelons une application vers est une application qui est retour sur les les dans section sous une forme nous isolons les multiplications ici nous avons avec trois matrices les comme des formes celles comme des formes celles comme des formes ces formes sont sur espace des matrices ordre sur anneau les affectations qui produit multiplication rapide des matrices ont par autres affectations avec avantage avoir que multiplications analyse nous que passage avantage nombre des additions par ailleurs nous avons formes sur espace vit matrice formes sur espace vit matrice les produits les comme combinaisons des nous appelons base canonique espace vit matrice nous pouvons les sont des combinaisons suivantes des cij bilan des courses formes formes vecteurs peu plus savantes ceci utilisant notation tensorielle application correspond tenseur suivant premier membre provient directement cik peut choix que ces tenseurs appartiennent espace tensoriel abstrait construit 
partir des trois espaces vivent les matrices bien ils sont dans espace des applications vers dans dernier cas tenseur est par application rang tensoriel une application plus corps trois espaces vectoriels dimensions finies soient des bases notons les bases duales toute application vers est alors une somme tenseurs est par les images elle donne pour les vecteurs des bases canoniques obtient ipso facto les peuvent les sur les trois bases important point vue calcul sont les manipulation des tenseurs qui disent droit utiliser comme importe quel produit utilisant par rapport chacune des mais pas peut par exemple supprimer les symboles calculer avec des variables formelles place des condition pas autoriser commutation deux variables entre elles par contre elles commutent avec les objet abstrait correspondant calcul appelle anneau des non commutatifs coefficients dans rang tensoriel une application soient corps trois espaces vectoriels dimension finie note bil espace des applications vers soit bil appelle rang tensoriel plus petit entier tel que puisse sous forme les sont dans les sont dans les sont dans autrement dit encore est plus petit entier tel que puisse comme trois applications selon format suivant sont des applications est produit par programme correspondant appelle calcul rang tensoriel est encore nous noterons importance rang tensoriel dans les questions par gastinel strassen multiplication rapide des matrices remarque nous laissons libre choix pour tenseur pour les gens savants cet objet vit dans espace tensoriel abstrait canoniquement isomorphe espace des applications mais peut aussi que cet objet est par application remarque lecteur lectrice peut donner analogue pour rang tensoriel une application celui une forme retrouve notion usuelle rang pour ces objets remarque nous pourrions remplacer dans corps par anneau commutatif arbitraire condition des espaces convenables analogues aux espaces vectoriels une est que doivent des modules libres des modules isomorphes par exemple pour produit matriciel cadre plus naturel serait choisir travailler sans aucune sur anneau remarque contrairement rang une application rang tensoriel une application est difficile semble pas connaisse algorithme qui travail sauf pour quelques classes corps particuliers les corps finis les corps clos par exemple mais dans ces cas les algorithmes sont impraticables rang tensoriel des applications sur corps fini est pcomplet prenant pour entier une application par ses sur trois bases est rang tensoriel est non rang tensoriel multiplication des matrices notation tensorielle pas seulement avantage elle aussi nous aider sur les calculs mis meilleure strassen est dire que miracle est produit quand tenseur multiplication des matrices ordre comme somme tenseurs chaque fois arrive multiplication des matrices ordre comme une somme tenseurs avec une bonne valeur log log obtient que multiplication des matrices tombe dans classe nlog log log car calcul section pourra fonctionner identique plus effectuer des produits matrices ordre par blocs taille implique que produit matriciel est par une somme tenseurs outre profondeur programme correspondant produit des matrices ordre est entier alors celle programme correspondant produit des matrices ordre est profondeur multiplicative profondeur tenant compte que des multiplications essentielles page est depuis strassen nouveau sport auquel ont quelques grands noms faire diminuer log log des pour des valeurs plus plus grandes aspect fascinant notation tensorielle pour les applications est 
elle une entre les trois espaces jeu rappelons agit ici espaces matrices qui est pas directement visible sur fait aurait vraiment que nous notre tenseur comme application tensorielle permet traiter des arguments sous forme scripturale pour montrer que jeu est bien plus jeu prenons nouveau les que nous avec des tenseurs nous marquons pas entre formes vecteurs cela donne alors pour produit matriciel aij bjk cik multiplication rapide des matrices avec pour une par permutation circulaire dans les indices dans les cik alors invariance par permutation circulaire nous pouvons remplacer partout par finalement nous permutons nouveau les indices dans les nouveaux cik pour revenir ceci nous donne autres qui peuvent tout aussi bien servir que les naturellement dans cas obtient seulement sans fatigue nouveau pour traiter produit matriciel mais nous parti produit matrices rectangulaires non permutation circulaire deviendrait outil vraiment efficace produisant des correspondant cas figure vraiment nouveau cette remarque importante remonte notez que nous avons une situation analogue nous cas des applications vers nous dit que passage est isomorphisme termes matrices est une banale transposition termes tensorielle est jeu les tenseurs remplacent les matrices lorsqu plus que deux espaces cause notation rang tensoriel multiplication des matrices soient trois entiers corps note pik application note donc rang tensoriel par doit proposition rang tensoriel multiplication des matrices alors phm hmi est invariant par permutation des entiers preuve point est facile peut des matrices correspondant format par des pour faire des matrices format les points faire des produits matrices par blocs point avant proposition voir page peut redire peu chose sous forme suivante peu plus abstraite qui mieux essence sont trois espaces vectoriels dimensions finies alors multiplication des matrices correspondante est canonique espace bil hom hom hom bil nous notons espace des formes sur nous avons isomorphisme canonique bil dans situation aussi une canonique entre hom hom sous forme matricielle par qui fournit isomorphisme canonique entre hom une fois mis bout bout tous ces isomorphismes canoniques voit que canonique bil hom hom hom correspond canonique hom hom hom sous forme matricielle par abd maintenant est bien connu que abd bda dab abd abd ceci les multiplication rapide des matrices voyons maintenant point nous reprenons les notations avec donc identifie hom regardons espace bil sous forme canoniquement hom hom produit une matrice par vecteur colonne sous forme voit que application correspondante vers hom est nulle sur ker mais dans cas modulo les identifications cette application est autre que application identique son noyau est donc est moins dimension exposant multiplication des matrices dit que est exposant acceptable pour multiplication des matrices peut log borne des exposants acceptables est exposant multiplication des matrices elle est priori devrait mettre indice corps pour les exposants les concernant ces exposants dont nous rendons compte sont cependant corps rang tensoriel exposant multiplication des matrices existe tels que alors exposant log log est acceptable pour multiplication des matrices existe tels que alors exposant loglogmnp est acceptable pour multiplication des matrices preuve comme nous avons point calcul que celui fait dans section point point puisque les points proposition avec mnp fait conclusion dans est non seulement existe une famille circuits dans classe log qui multiplication des matrices mais 
sait construire explicitement une telle famille uniforme circuits outre temps construction circuit est proportionnel taille selon les lignes preuve point peut comme suit par calcul section proposition pour supposons que application puisse par circuit profondeur contenant multiplications essentielles autres addition soustraction multiplication par une constante avec alors application peut par circuit deur contenant multiplications essentielles autres versus multiplicative soient corps deux espaces vectoriels dimension finie une application quadratique vers est par une application forme bil sont des bases revient dire que chaque est une forme quadratique second les les sont prises comme variables peut alors les programmes sans division qui permettent calculer les multiplicative est alors comme plus petite longueur multiplicative tel programme nous noterons comme les changements base rien longueur multiplicative cette pas choix des bases lem suivant est une paraphrase proposition dans cas une application quadratique lem avec les notations multiplicative une application quadratique est aussi plus petit entier tel que puisse comme trois applications selon multiplication rapide des matrices format suivant sont des applications est produit par programme correspondant cette appelle calcul quadratique remarque des circuits avec division pourrait pas diminuer pour autant longueur multiplicative pour une application quadratique moins dans cas corps infini proposition proposition soient corps trois espaces vectoriels dimension finie soit bil alors est une application quadratique vers multiplicative sont par preuve est pour seconde programme quadratique comme dans lem qui calcule avec multiplications essenteielles donc les sont dans remarquons puisque est peut supprimer les termes dont somme est nulle chose avec finalement obtient ceci montre que suivant qui relie rang tensoriel exposant multiplication des matrices exposant multiplication des matrices est borne des exposants qui pour moins entier aussi lim log log lim log log chaque suite converge vers borne preuve est clair proposition que les deux suites ont borne direct plus dans tout exposant est acceptable strictement borne des log rloghn pour multiplication des matrices pour pour assez grand programme sans division longueur qui calcule application quadratique fortiori longueur multiplicative est relation entre est seulement rang tensoriel qui permet les base concernant exposant multiplication des matrices cela tient que proposition serait pas vraie rang tensoriel par longueur multiplicative fait interdire commutation dans les tenseurs est qui permet traiter correctement produit des matrices par blocs extension corps base soient corps trois espaces vectoriels dimension finie soit bil est une extension peut naturelle nous nous tiendrons ici point vue pragmatique purement calculatoire sont des bases nous trois espaces vectoriels ayant les bases extension est par comme tout calcul dans est aussi calcul dans mais peut multiplication rapide des matrices que utilisation constantes dans puisse faciliter calcul peut stricte nous allons cependant voir dans paragraphe que exposant multiplication des matrices peut pas changer lorsqu passe corps une extension lem avec les notations est une extension finie existe entier tel que pour toute application particulier exposant multiplication des matrices change pas lorsqu passe corps des fractions rationnelles est infini preuve dans cas famille finie des constantes dans par circuit peut par des constantes est choisi 
annuler aucun des dans cas une base lorsqu voit comme espace vectoriel multiplication dans une application sur lorsqu elle est traduite dans les sur base cette application peut par multiplications essentielles dans fait constante peut prise rang tensoriel cette application qui est tout calcul dans peut alors par calcul dans suivante chaque variable sur est par variables sur qui les sur base seules les multiplications essentielles calcul dans produisent des multiplications essentielles dans dans cette simulation nombre multiplications essentielles est par constante maintenant existe entier tel que donc pour une puissance convenable donc suivant proposition exposant multiplication des matrices sur corps que preuve suffit prouver que exposant change pas lorsqu passe corps premier des une ses extensions calculs approximatifs supposons ait ait sur corps une cik peut les des sur les bases cij comme des deux tenseurs signifie que ces polynomiales dont tous les coefficients sont maintenant lecteur lectrice beau suivant lorsqu polynomiales sur corps admet une solution dans une extension alors admet une solution dans une extension finie est donc premier cas lem remarque exposant peut donc prenant pour lorsqu affaire corps clos rang tensoriel est calculable principe sinon pratique car savoir revient admet non une solution comme dans preuve proposition sait principe genre questions par algorithme sait cependant pas grand chose concernant cet exposant mythique est nombre compris entre mais sait apparemment toujours rien sur vitesse avec laquelle suite log log converge vers pourrait que vitesse convergence soit lente que nombre serait impossible calculer par des calculs approximatifs bini des calculs approximatifs est des elle par bini elle admet nombreuses preuves dont certaines tout fait explicites essentiellement est peut faire nullstellensatz hilbert lem normalisation noether encore des bases trouve dans les bons livres multiplication rapide des matrices quelque avec des divisions strassen elle situation pour exposant multiplication des matrices exemple est produit deux matrices avec qui son coefficient nul produit sous forme suivante figure produit matriciel trous correspond figure produit matriciel trou bini tenseur rang qui avec notation des non commutatifs introduit pour les variables tenseur rang par des xij avec lorsqu obtient donc lorsque est suffisamment petit dit que constitue une approximation ordre peut transformer ceci calcul purement formel dans anneau des ordre comme lorsqu les divisions strassen naturellement pas miracle cela donne pas une comme somme tenseurs mais quelque chose gagner prenant peu recul analysant qui passe tout abord appliquant suivant fait produit par blocs non rectangulaires constate que produit matriciel peut approximative ordre par une somme tenseurs lieu produit matrices par blocs rectangulaires pourra alors approximative ordre calculs approximatifs figure produit matriciel plein convenable nous allons cela peu plus loin comme somme tenseurs lieu des dans usuelle enfin reste que lorsqu passe produit par blocs ordre approximation pas trop vite que calcul approximatif calcul exact devient devant pour importe quel tout ceci exposant log log log log nous devons maintenant donner des plus pour que plan travail fonctionne bien soient corps trois espaces vectoriels dimensions finies soit bil soit anneau des variable sur bil est une approximation ordre modulo calcul est calcul approximatif ordre appelle rang tensoriel marginal ordre plus petit rang possible pour calcul 
approximatif ordre note enfin rang tensoriel marginal est plus petit des est nous dirons aussi plus simplement rang marginal remarque nous utilisons ici des calculs sur anneau comme dans remarque extension anneau fait comme dans cas une extension corps base page remarque est clair que lorsque augmente rang tensoriel marginal ordre une application peut que diminuer multiplication rapide des matrices autrement dit rang tensoriel marginal ordre une application est priori nettement plus difficile calculer que son rang tensoriel rang marginal est encore plus difficile fait est satisfait quand une bonne majoration rang marginal explicitant calcul approximatif quel est calcul approximatif ordre nous avons fait calcul analogue dans preuve proposition qui concernait une mise forme des circuits sans division commence par que calcul passe non pas sur anneau mais sur anneau des ordre ensuite simule toute variable qui modulo par variables dans qui les coefficients dessous quand doit calculer coefficient dans tenseur voit doit faire somme des pour tous les triplets dont somme vaut plus triplets type termes rang tensoriel cela signifie donc que rang tensoriel est par fois son rang marginal ordre nous avons donc lem suivant lem soit une application sur corps calcul approximatif ordre une par calcul bref maintenant nous devons examiner comment comporte rang marginal produit matriciel lorsqu utilise des produits par blocs proposition rang tensoriel marginal multiplication des matrices rang marginal est une fonction croissante chacun des entiers entier calculs approximatifs particulier avec mnp hmi est invariant par permutation des entiers preuve tout passe comme avec rang tensoriel usuel dans preuve proposition seul point qui demande peu attention est point meilleure comprendre est encore une fois prendre recul faut prendre recul sur que tenseur par rapport aux tenseurs lorsque nous voyons une matrice type comme une matrice type ayant pour des matrices aij type nous une grosse matrice par deux paires indices correspondant couple indices comme dans exemple par figure avec figure par blocs cependant mise ligne paire sous forme elle est indispensable dessin une des choses est obstacle pour qui concerne calcul que produit par blocs prenons multiplication rapide des matrices effet les indices dans grande matrice sous forme des couples comme dans figure non pas non plus nous obtenons notation non commutatifs somme est prise sur pour des ensembles indices convenables alors nous avons nous avons mis des pour cas les ensembles indices dans premier tenseur seraient pas disjoints ceux second condition respecter les calcul suivantes vaut pour idem avec une fois ceci nous avons plus besoin penser calcul que produit par blocs nos nouvelles calcul fonctionnent toutes seules produisent automatiquement aussi bien dans proposition que dans proposition nous sommes effet maintenant constatation banale suivante concernant les premier terme non nul produit deux est produit des premiers termes non nuls chacun des deux les ordres des deux ajoutent raisonnement fait paragraphe tenant compte proposition lem donne alors bini existe tels que alors log log existe tels que alors mnp loglogmnp calculs approximatifs bref pour qui concerne exposant une donne une pourra remarquer que preuve est tout fait explicite connait calcul approximatif ordre qui utilise multiplications essentielles pour produit matriciel loglogmnp alors sait construire entier calcul pour produit matriciel qui utilise moins multiplications essentielles corollaire pour 
exposant multiplication des matrices log log une bini pas dans premier temps une importante exposant mais elle ouvert voie aux suivantes beaucoup plus substantielles dans strassen remplace pour calculer produit matriciel calcul avec multiplications correspondant produit par calcul avec seulement multiplications essentielles obtient log log dans bini utilise produit matrices trous dans lesquel les multiplications qui interviennent dans produit matriciel peuvent dans calcul approximatif par seulement multiplications essentielles cependant lieu aboutir log log comme dans strassen abouti log log avait quelque chose immoral obtenu dans travail voir suivante dans produit matrices trous est capable remplacer dans calcul approximatif les multiplications qui interviennent dans produit matriciel par seulog lement multiplications essentielles alors log particulier log log reste paragraphe est preuve selon les lignes preuve est faite sur corps infini qui est proposition plus simple est commencer sur exemple nous allons voir directement sur exemple bini quelle est machinerie mise par multiplication rapide des matrices strassen donne des produits matriciels trous successifs type suivant figures produit matriciel trous une fois figure figure bini une fois peut obtenu par calcul approximatif ordre rang lieu ceci comme point dans proposition produit matriciel trous deux fois figure figure bini deux fois peut obtenu par calcul approximatif ordre rang ceci aussi comme point dans proposition plus preuve donne proposition notons application qui correspond produit matriciel trou certaines sont nulles les autres sont des variables notons produit matriciel trou obtenu fois produit alors calculs approximatifs dans produit par blocs deux produits matriciels trous note produit matriciel trous que obtient les revenons notre exemple dans produit figure nous pouvons les colonnes matrice qui contiennent chacune les lignes qui contiennent obtient produit trous suivant figure point vue calcul approximatif cette extraction lignes colonnes revient simplement remplacer des variables par des donc peut que simplifier nous figure produit trous extrait bini deux fois maintenant une matrice application est fait une application entre deux espaces vectoriels dimension admettons moment que les coefficients peuvent choisis que soit isomorphisme lem compression posant voit que produit matriciel sans trou est sous forme par calcul approximatif ordre rang multiplication rapide des matrices pour obtenir sans aucune multiplication essentielle puis calculer plus nous pouvons produit trous obtenu fois processus bini matrice est plus plus creuse dans chaque colonne nombre est une puissance les colonnes ayant nombre disons colonnes avec non nulles obtient produit trous format chaque colonne exactement appliquant lem compression nous choisissons une matrice convenable nous par nous obtenons produit matriciel sans trou sous forme par calcul approximatif ordre rang quel est comportement asymptotique calcul peut facilement convaincre que produit est obtenu comme des termes selon formule cela tient que matrice trous initiale une colonne deux une autre une comme formule est une somme termes plus grand ces termes est certainement donc par choix optimal nous obtenons donc appliquant proposition hnk appliquant lem hnk qui donne bien par passage limite log log avant passer preuve dans cas nous montrons lem compression lem lem compression soit aij une matrice trous format dont les sont bien nulles bien des variables nous supposons que matrice variables 
nulles dans chaque colonne les variables dans corps obtient espace vectoriel calculs approximatifs dimension suppose corps infini alors existe une matrice telle que application soit bijective preuve les colonnes sont les unes des autres chaque colonne matrice gardant que les non nulles est selon est une matrice extraite gardant que colonnes ensuit que application est bijective seulement les matrices sont inversibles pour cela suffit que colonnes distinctes soient toujours combinatoire admet toujours une solution sur corps ayant suffisamment construit une matrice convenable avec colonnes pour rajouter une colonne faut choisir vecteur dehors des hyperplans par importe quel colonnes extraites passons maintenant preuve cas nous supposons que nous avons produit matrice trous par exemple style suivant figure qui peut par calcul figure exemple arbitraire produit matriciel trous matif supposons que les colonnes successives nombre contiennent respectivement supposons que les lignes successives contiennent respectivement priori produit trous multiplications tenseur qui correspond est une somme tenseurs dans exemple multiplication rapide des matrices supposons calcul approximatif ordre rang permette produit trous fois calcul approximatif obtient nouveau produit matrices trous par exemple pour obtient produit trous suivant figure une colonne doit par figure exemple une fois une telle colonne contient alors mjk mut non nulles chaque est nombre des une ligne doit par elle contient njk non nulles parmi toutes les colonnes toutes celles qui fournissent une certaine liste exposants particulier elles ont toutes nombre mur non nulles avec nombre des colonnes question est coefficient multinomial nous parmi les lignes toutes celles correspondant aux indices qui sont des elles ont toutes nombre non nulles nut nous obtenons cette produit matrice trous comme les colonnes ont toutes nombre non nulles peut utiliser lem compression chose pour tenant compte fait que toutes les lignes ont nombre calculs approximatifs non nulles nous obtenons produit matriciel sans trou type qui est par calcul approximatif ordre rang quoi est est des termes multinomial choisit terme plus grand dans cette somme obtient donc car termes dans cette somme termine comme dans cas particulier hmk hmk log par passage limite appliquant log remarque dans indique des produits matriciels trous avec rang marginal plus avantageux que celui bini qui donne mais dernier est par formule asymptotique obtient ensuite que nous exposons dans paragraphe suivant remarque dans lem est possible remplacer par avec cette est uniquement pour des entiers grands que bini aussi bien que celle fournissent meilleur calcul pour que celui qui originale strassen ces sont donc pas sur machine sommes directes applications approfondissant son analyse des produits matrices trous que certains produits type figure permettent construire partir calcul approximatif des calculs exacts donnant meilleur exposant pour multiplication des matrices que celui dans exemple figure correspond somme disjointe peut dire aussi somme directe encore juxtaposition des deux applications somme directe multiplication rapide des matrices figure somme directe deux produits matriciels deux applications est application par point vue des calculs calcul possible pour somme disjointe consiste faire seulement les deux calculs avec toutes les variables distinctes notation note somme directe des applications note pour somme directe exemplaires fait alors les remarques suivantes premier lem est fois simple 
crucial lem supposons alors preuve application peut comme produit par blocs chacune des deux matrices multiplie blocs format les multiplications correspondantes type qui sont priori pour produit par blocs peuvent par seulement produits entre combinaisons convenables des blocs selon fourni par calcul qui montre lem particulier avec mnp particulier avec mnp calculs approximatifs preuve est toujours produit par blocs avec les produits matriciels trous correspondants peut agit cas particulier proposition proposition suivante qui proposition existe tels que alors mnp log log mnp autrement dit pour qui concerne exposant donne une preuve appliquant lem obtient avec mnp puis aussi donc par passage limite cela nous cas des entiers tels que veut alors montrer log log log posons log supposons tout abord connaisse calcul qui montre que posons log log est exposant acceptable rien faire lem nous dit que donc exposant log log log log log est acceptable pour multiplication des matrices calcul simple montre alors que donc nous avons situation passant nous voyons maintenant travail qui nous reste faire primo montrer que avec entier cela est pas trop grave car peut utiliser avec rapport leurs logarithmes aussi proche veut donc par lem qui conduit exposant acceptable log log log log log multiplication rapide des matrices avec aussi proche veut secundo montrer que recommence les exposants successifs obtient convergent bien vers nous ferons pas travail car les techniques deviennent vraiment trop lourds conjecture additive strassen une conjecture strassen est que est fait toujours une appelle cette conjecture conjecture additive pour rang tensoriel des applications bien que plausible cette conjecture par qui que variante avec rang tensoriel marginal palce rang tensoriel est fausse lem conjecture additive est vraie seulement pour arbitraires preuve proposition est car directement mais cela fournirait les calculs que capable trouver calcul rang convenable pour partir calcul pour lem pour preuve nous montrons seulement nous produit par non commutatif produit par non commutatif pour simplifier les qui suivent nous prenons avec nous introduisons outre les notations sorte que calculs approximatifs alors non commutatif suivant qui correspond calcul approximatif avec multiplications essentielles qui une fois donne asymptotique revenons produit trou figure qui juxtaposition nous une fois strassen produit trous nous obtenons nouveau produit trou correspondant figure qui peut par changement figure somme directe une fois deux produits matriciels des lignes colonnes produit trou qui correspond figure nous voyons clairement que cela signifie multiplication rapide des matrices figure somme directe une fois lecteur lectrice est par une fois que avec avec formule cette est pas hasard est bien machinerie combinatoire qui est dans les deux cas fois obtiendra indique une somme disjointe applications fait nous avons une formule les sommes indiquent des sommes disjointes applications isomorphisme correspond une organisation convenable des lignes colonnes produit matriciel trous correspondant premier membre hmi mui nui pui somme est prise sur tous les tels que formule asymtotique suivante calculs approximatifs formule asymptotique supposons ait hmi alors obtient pour exposant multiplication des matrices preuve notons abord que donne log log donc log log appliquant formule proposition obtient mui nui pui pour choix particulier nous notons ceci sous forme hmk qui nous donne proposition log log quel est choix optimal nous mui nui pui 
somme droite termes donc pour plus grand entre eux obtient qui donne log log log log par passage limite log est log multiplication rapide des matrices corollaire exposant multiplication des matrices preuve applique formule asymptotique avec somme disjointe lem rapide introduction une importante multiplication rapide des matrices est recherche calcul permettant ramener les classiques une ordre que celle multiplication des matrices bien que nous utilisions multiplication rapide des matrices qui est obtenue par algorithme bien les algorithmes obtenus dans chapitre sont pas bien leur profondeur est qui explique titre chapitre rapide nous avons section que inverse une matrice triangulaire ordre peut calculer par une famille uniforme circuits taille profondeur nous allons dans chapitre montrer que pour autant travaille sur corps ait droit division des familles circuits ayant des tailles voisines peuvent construites pour les principaux sur corps mais dans tous les algorithmes que nous exhiberons temps profondeur circuit est plus polylogarithmique outre comme sont des circuits avec divisions ils peuvent pas sur toutes les nous donnerons une version sous forme algorithme avec branchements les branchements sont par des tests dans corps dans ces algorithmes qui correspondent plus des circuits ceci est division est pas trop termes binaire rapide proprement dits nous aurons pour temps temps des estimations voisines celles obtenues pour les circuits avec divisions par exemple calcul inverse une matrice elle est inversible peuvent par une famille uniforme circuits avec divisions taille voir section ceci est une algorithme bunch hopcroft pour lup que nous dans section cet algorithme naturellement sous forme algorithme avec branchements qui concerne calcul plusieurs algorithme frobenius section assez ont mises point par algorithme avec branchements qui utilise temps log une rapide pour mise forme lignes une matrice arbitraire ceci est dans les sections dans section nous quittons cadre sur les corps mais nous restons dans celui multiplication rapide des matrices nous kaltofen algorithme probabiliste wiedemann efficace pour les matrices creuses sur des corps finis elle donne meilleur temps actuellement connu pour calcul adjointe une matrice sur anneau commutatif arbitraire algorithme utilise multiplication rapide des celle des matrices contrairement algorithme wiedemann celui kaltofen cependant pas encore fait objet une satisfaisante les algorithmes dans chapitre sont plus rapides que les algorithmes usuels chapitre encore malheureusement loin pratique fait seule forme multiplication rapide des matrices celle strassen correspondant log commence outre pratique autres algorithmes multiplication rapide des matrices les coefficients pour meilleures valeurs sont trop grands leur efficace que pour des matrices tailles astronomiques algorithme bunch hopcroft algorithme bunch hopcroft pour des matrices surjectives dans section nous avons algorithme page qui est algorithme usuel par pivot gauss pour lup des matrices surjectives lup que nous allons ici fait appel multiplication rapide des matrices cette que nous noterons lup est due bunch hopcroft algorithme bunch hopcroft prend une matrice rang donne sortie triplet tel que est une matrice unitriangulaire une matrice triangulaire fortement une matrice permutation lup lupn pour est une matrice ligne rang existe donc non nul occupant place cette ligne suffit prendre est matrice permutation ordre correspondant des colonnes donc lup pour matrice ainsi supposant vraie pour 
tout entier compris entre pour dlog pose pour obtenir lup avec partition suivante matrice est une matrice surjective sont commence par appeler lup qui donne une lup alors les partitions suivantes des matrices triangulaire inversible puisque est fortement posant que rapide comme matrice satisfait elle est surjective puisque est peut appliquer lup qui donne lup dans laquelle est une matrice triangulaire fortement suffit poser pour obtenir qui donne lup avec obtient algorithme algorithme lup bunch hopcroft pour une matrice surjective une matrice surjective est corps sortie les matrices utilise partition avec dlog lup pas ici avec inversion une matrice triangulaire avec partition avec lup pas ici fin algorithme obtenu est algorithme avec branchements ceci est puisque sortie discontinue algorithme bunch hopcroft les branchements sont tous par test dans corps notons nombre par cet algorithme pour les matrices son temps profondeur prend pas compte les recherche non nuls les produits une matrice par une matrice permutation alors suivant page les suivantes tout abord concernant nombre terme correspond soustraction terme correspond calcul produit dans lequel peut toujours par des lignes pour faire une matrice blocs lui avoir des colonnes effectue alors multiplications dans ensuite concernant temps obtient utilisant inversion des matrices triangulaires section une matrice surjective type sur peut par algorithme avec branchements qui nombre temps par lpm lpm log max notons que pour taille circuit correspondant algorithme bunch hopcroft est exactement par log rapide preuve calcul lup fait nous donnons les majorations pour cas est clair que calcul peut que plus rapide pour temps donc vue pas relation avec est par maple par qui donne pour calculer nombre pose suppose sans perte que puisque qui donne dans algorithme lorsqu traite les matrices type donc ramenant obtient les sachant que obtient par sommation simplification solution une relation majoration suivante log qui donne majoration vaut aussi pour les matrices type avec calcul inverse calcul inverse une matrice lup permet calcul rapide inverse une matrice inversible ramenant ces multiplication rapide des matrices ordre effet passe par lup calcul une matrice effectue avec ordre que multiplication des matrices puisque alors det detu qui revient calculer produit des diagonaux matrice triangulaire est signature permutation par matrice donc lup calcul log par circuit binaire est pour calcul inverse quand elle est inversible puisque qui revient plus lup inverser deux matrices triangulaires effectuer produit matrices priori les algorithmes calcul inverse tels que nous venons les sont des algorithmes avec branchements dans cette perspective recherche des non nuls comme celui des permutations lignes colonnes des multiplications gauche droite par une matrice permutation est pas pris dans les comptes aussi bien point vue leur nombre total que celui leur profondeur peut aussi prendre point vue selon lequel nous avons construit des familles uniformes circuits avec divisions qui calculent des fractions rationnelles formelles les coefficients matrice alors pas lup mais seulement une sans aucun branchement naturellement contrepartie est que algorithme peut pas sur corps avec une matrice arbitraire est seulement pour une matrice que circuit fonctionne une telle matrice est une matrice qui lorsqu lui applique algorithme avec branchements subit tous les tests donnant une dans nos nous adoptons second point vue rapide proposition calcul une matrice ordre sur corps est 
par une famille uniforme circuits avec divisions les constantes asymptotiques sont respectivement par pour taille par pour profondeur les majorations des constantes que celles proposition inversion une matrice ordre sur corps est par une famille uniforme circuits avec divisions avec estimation que celle proposition pour constante asymptotique profondeur une constante asymptotique par pour taille dans constante proposition terme correspond lup terme inversion deux matrices triangulaires suivie multiplication deux matrices forme lignes dans cette section nous donnons sur une permettant les matrices coefficients dans corps commutatif forme lignes avec une ordre que celle multiplication des matrices une matrice type sur forme lignes consiste transformer ayant exclusivement recours des transformations unimodulaires sur les lignes une matrice type sur avec nombre strictement croissant apparaissant gauche des lignes successives matrice note matrice unimodulaire correspondant ces transformations cela revient multiplier matrice gauche par matrice rappelons page agit une part transformation qui consiste ajouter une ligne une combinaison des autres autre part des lignes type matrice obtenue faisant subir matrice ordre les transformations forme lignes prenons par exemple matrice ordre peut forme lignes effectuant des transformations style pivot gauss sur les lignes ces transformations sur les lignes matrice ordre donnent matrice unimodulaire qui ces transformations matrice lignes est alors par produit comme nous avons fait pour lup agit ici une version rapide pivot gauss sur les lignes mais contrairement lup aucune est faite sur matrice aucune permutation colonnes est permise contrepartie dans qui cette matrice seulement unimodulaire forme lignes trouve justification son application dans des comme des une base pour par elle sera aussi dans section pour calcul rapide sur corps que nous allons exposer est due elle est reprise dans bcs rapide description rapide une matrice pour forme lignes peut supposer sans perte que quitte matrice avec suffisamment lignes colonnes principale que nous noterons fel utilise les auxiliaires suivantes est une qui transforme une matrice dont est triangulaire une matrice triangulaire plus avec gulaire calcule une matrice unimodulaire triangulaire telles une matrice utilisant approche diviser pour gagner divise que matrice huit blocs traitement matrice est applique aux blocs qui que obtient avec des notations suivant avec les lignes les colonnes feront objet aucune manipulation forme lignes posant bien laire sont est une matrice elle prend une matrice retourne une matrice unimodulaire sln une matrice triangulaire encore obtient avec approche diviser pour gagner des notations analogues celles suivant est matrice unimodulaire correspondant algorithme matrice est donc une matrice triangulaire pose alors que les matrices correspondent application respective algorithme matrice qui est type algorithme matrice qui est ordre cela traduit par fait que est triangulaire que posant avec bien est une matrice triangulaire rapide elle prend une matrice triangulaire avec donne sortie une matrice unimodulaire sln une matrice sous forme lignes partition blocs matrice sont des matrices triangulaires est alors par suivant dans lequel est abord algorithme qui est matrice pour donner matrice est une matrice surjective lignes est est matrice pour donner matrice est triangulaire est enfin qui matrice donne matrice lignes avec maintenant pose est rang alors avec qui est bien une matrice lignes 
puisque sont principale fel elle prend une matrice retourne une matrice unimodulaire sln une matrice sous forme lignes cas est trivial pour applique auxiliaire nombre ses lignes est son rang qui est aussi celui forme lignes pour transformer matrice une matrice triangulaire puis pour transformer une matrice lignes analyse principale fel passe par celle des trois algorithmes auxiliaires par les tailles par les profondeurs respectives ces trois algorithmes les majorations suivantes dans lesquelles les coefficients sont les constantes intervenant dans taille profondeur des multiplication des matrices pour les tailles pour les profondeurs faut remarquer que dans les peuvent qui explique diminution coefficient entre utilisant les fait que nous allons montrer suivant concernant forme lignes proposition forme lignes une matrice ordre sur corps commutatif est par une famille uniforme circuits taille profondeur avec les majorations suivantes nlog rapide preuve les sommations des relations une part des relations autre part pour allant avec donnent les majorations suivantes pour taille profondeur circuit correspondant tenant compte ces relations fait que les sommations pour allant des relatives taille profondeur circuit correspondant nous donnent majoration dans laquelle avec majoration obtient par des calculs analogues les majorations suivantes pour taille profondeur circuit correspondant des majorations fait que remarque fait des matrices dont nombre lignes colonnes est une puissance est pas une restrictive peut effet plonger toute matrice dans une matrice ordre prenant max dlog dlog matrice par lignes colonnes nulles les subissent aucune transformation cours dans proposition reste valable condition remplacer par max les algorithmes sont des versions algorithme frobenius que nous avons section dans section nous que plus simple ces algorithmes nous reprenons les notations section matrice endomorphisme nous appelons base canonique dans cas simple nous examinons ici cas plus simple plus cas est une base nous par matrice vecteurs vecteur endomorphisme dans une base alors est matrice passage sont les coefficients dans relation ceci prouve que est semblable une matrice frobenius que son est algorithme dans cas plus simple consiste calculer matrice puis produit pour obtenir par simple lecture colonne les coefficients prenant dlog calcul fait consiste calculer matrice matrice rapide calculer matrice partir trice pour obtenir matrice fin ces obtient matrice qui admet comme matrice puisque calcule ensuite colonne par inverser matrice passant par lup enfin calcule colonne multipliant par colonne puis calcule analyse dans cas simple nous donne donc proposition peut calculer une matrice ordre coefficients dans corps moyen circuit avec divisions log log taille plus par dlog sont les constantes intervenant dans les multiplication des matrices inversion des matrices voir proposition page cas algorithme fournit une famille uniforme circuits avec divisions qui calcule une matrice sur corps sens des circuits avec divisions autrement dit circuit correctement tant que fraction rationnelle tant corps aij les coefficients aij matrice sont pris comme des mais calculer toute matrice qui pas minimal que est donc dans une situation pire que pour calcul bunch hopcroft car dans dernier cas suffit multiplier droite gauche matrice par des matrices unimodulaires petits coefficients entiers prises hasard pour obtenir une matrice qui une avec une grande ceci son est nul algorithme page algorithme bunch hopcroft sans branchement avec 
preprocessing que nous venons indiquer que dans cas une matrice dont rang est strictement est donc produisant algorithme avec branchements qui fonctionne dans tous les cas que son tour force pour cela lui fallait abord rapide une matrice forme lignes sur corps dans cette nous avons que profondeur algorithme avec branchements est nlog obtient suivant une matrice ordre sur corps peut par algorithme avec branchements qui pour taille log une version plus rapide pour les cas favorables notons que propose une version plus rapide pour algorithme avec divisions mais sans branchements qui calcule dans les conditions proposition proposition peut calculer une matrice ordre coefficients dans corps moyen circuit avec divisions qui pour taille une version signalons enfin une algorithme obtenue par giesbrecht pour algorithme wiedemann section anneau commutatif arbitraire les divisions contient cette est seulement probabiliste elle fonctionne toujours pratique rapide temps son aspect kaltofen est lui appliquer des divisions strassen page doit pour cela exhiber une matrice couple vecteurs pour lesquels algorithme wiedemann effectue sans divisions tels que minimal suite qui est par algorithme est est autre par que minimal signe kaltofen suite nombres entiers par est impair avec estpair les premiers termes sont applique algorithme aux premiers termes constate que les restes successifs dans algorithme euclide jusqu ont coefficient dominant avec diminuant que une seule chaque pas que pour qui garantit fait que les appartiennent que constate que les multiplicateurs ont coefficient dominant terme constant que par dans obtenue avec est qui signe avec kaltofen montre partir algorithme qui calcule les coefficients que ces derniers sont fait par formule pour est donc ainsi obtenu qui est minimal suite dont les premiers termes avec les premiers termes suite alors matrice par exemple pour matrice compagnon obtient est autre que enfin les deux vecteurs que les suites qui admettent unitaire commun sont telles que pour tout compris entre que pour tout ainsi par construction algorithme wiedemann prenant avec les deux vecteurs effectue avec les seules addition multiplication dans pour donner sortie minimal suite par rapide soit maintenant aij une matrice ordre coefficients dans agit calculer utilisant que les cela fait par des divisions dans algorithme wiedemann pour matrice prenant comme centre des divisions point par matrice les deux vecteurs auxilaires comme les coefficients les sorties algorithme wiedemann sont des les coefficients aij utilise des divisions strassen donc une sur pose applique algorithme wiedemann dans anneau matrice avec les vecteurs auxiliaires par dans les sorties cet algorithme calcule minimal suite comme les seules divisions font par des terme constant ensemble calcul fait uniquement avec des additions multiplications dans algorithme page kaltofen pour calcul une matrice algorithme utilise comme habitude notation page ainsi que notation page donne suivant kaltofen calcul adjointe une matrice ordre sur anneau commutatif arbitraire fait aide une famille uniforme circuits log utilise une multiplication rapide des log log log log selon anneau cela fait donc log log log log pour algorithme kaltofen nous verrons chapitre que les algorithmes profondeur font moins bien dans cas anneau vraiment arbitraire ils utilisent log algorithme algorithme entier une matrice aij sortie pose variables locales vecteur centre des divisions cij matrice centre des divisions calcul centre des divisions initialisation pour 
faire fin pour pour faire fin pour calcul suite pour faire dans fin pour appliquer suite puis remplacer par dans minimal fin mais peu mieux dans cas anneau les entiers sont non diviseurs dans cours preuve qui suit nous ferons analyse version algorithme kaltofen nous obtenons suivant proposition dans version simple algorithme kaltofen calcul adjointe une matrice ordre sur anneau commutatif arbitraire fait aide une famille uniforme circuits taille plus avec nombre multiplications nombre additions ordre grandeur nombre multiplications essentielles est rapide preuve remarque tout abord que est les entiers elle calcule sont des constantes circuit disponibles une fois pour toutes leur calcul doit pas pris compte ils sont toute calculables quant affectation dans elle signifie point vue des dans effectue soustractions qui peuvent une seule est pour essentiel algorithme euclide elle fait avec circuit profondeur log taille est nombre pour multiplication deux dans profondeur log cela est fait que algorithme euclide comporte avec chacune dans anneau des certaines ces sont des divisions par des inversibles pour obtenir reste plus nombre voyons tout abord version calcule successivement les pour par bvk cela fait tout multiplications additions dans chacune des multiplications est produit une par une des les sont des forme est une constante une des non nulles est une tel produit consomme donc multiplications essentielles multiplications type produit par une constante additions dans version consomme multiplications essentielles multiplications non essentielles additions voyons maintenant version subdivise quatre qui sont les suivantes dans lesquelles pose pour calculer calculer matrice pour calculer pour pour calculer notez que bien que que les entiers parcourent tout intervalle cours des les coefficients sont des dans ils sont que chaque multiplication deux coefficients correspond circuit profondeur log avec base dans cela donne analyse suivante pour les pour obtenir tous les vecteurs pour peut blog chaque blog consiste matrice puis multiplier droite par matrice qui est une matrice pour obtenir matrice qui est une matrice dont les coefficients sont des chacune ces blog correspond donc circuit profondeur log log taille qui donne total pour circuit profondeur taille log est une puissance calcul fait matrice sinon faut faire produit certaines des matrices par exemple pour chaque produit les coefficients des matrices sont dans ceci correspond nouveau circuit profondeur taille log pour suite nous posons nous pouvons plus utiliser technique qui ici donnerait priori une famille uniforme circuits dans log partant vecteur notre algorithme consiste calculer pour allant vecteur posons notons dans sous forme chacun des est vecteur dont les composantes sont des peut donc identifier avec matrice calcul vecteur lignes colonnes fait comme suit calcule matrice dont les sont des rapide puis les sommes correspondantes pour obtenir qui plus additions dans produit est celui une matrice par une matrice toutes les ceci peut faire avec multiplications blocs chaque multiplication blocs fait sur des obtient donc chaque cela donne total pour une famille uniforme circuits dans core dans cette peut multiplication une matrice par une matrice dont position pour est autre que coefficient utilisant nouveau multiplication par blocs nous concluons que correspond circuit profondeur taille peut calcul dans tableau suivant qui donne pour chaque circuit correspondant temps que nous avons taille lorsqu algorithme avec une multiplication des 
mais sans multiplication rapide des matrices sur les lignes avec etape profondeur taille etape etape log avec etape log total log avec log tableau algorithme dans notre preuve est qui profondeur circuit correspondant algorithme mais peut profondeur par diverses une est pas utiliser algorithme pour calcul minimal une suite une telle dans voir aussi calcul qui forme toeplitz utilisant calcul matrice par verrier par csanky section obtient circuit profondeur taille cette est elle applique uniquement lorsque divise pas dans anneau une qui heurte pas obstacle consiste utiliser une version algorithme euclide voir corollaire page cependant suffit pas profondeur pour obtenir une profondeur polylogarithmique faudrait faire pour plus donc heure actuelle ouvert obtenir circuit taille cet algorithme profondeur polylogarithmique permettant calculer sur anneau commutatif arbitraire algorithme obtient que savoir meilleur temps rapide tous les algorithmes connus pour calcul sur anneau commutatif arbitraire multiplication rapide des matrices bien mais aussi multiplication rapide des pour les multiplication rapide est couramment sur machine ainsi lorsqu dispose pas une multiplication rapide des matrices obtient temps asymptotiquement meilleur que tous les autres algorithmes fonctionnant sur anneau commutatif arbitraire multiplication des que par karatsuba notons que sur anneau commutatif qui pas racines principales qui utilise transformation fourier rapide est log log log elle devient plus performante que karatsuba nlog que pour grand ordre plusieurs milliers section notamment remarque page vaste champ ouvre donc maintenant que multiplications rapides commencent avoir une pratique calcul formel conclusion nous terminons chapitre renvoyant lecteur deux surveys erich katofen gilles villard concernant aussi bien que binaire calcul des nous nous calcul dans cet ouvrage ils montrent quel point est sujet recherche actif calcul formel importance des modulaires pour traitement des concrets leverrier introduction csanky fut premier prouver que les calcul des inversion des matrices des calcul dans cas anneau contenant corps des rationnels sont dans classe dans classe des qui peuvent temps polylogarithmique avec nombre polynomial processeurs par une famille uniforme circuits montre effet que tous ces calcul que dernier calcule particulier ils sont dans classe nous travail csanky dans section dans section suivante nous donnons due preparata sarwate qui montre que calcul peut dans dans section nous donnons une meilleure estimation algorithme due galil pan dans chapitre nous examinerons des algorithmes qui les sur anneau commutatif arbitraire algorithme csanky pour calculer csanky utilise verrier suivante donne entier corps plus anneau dans lequel est inversible une matrice leverrier det xin pose pour verrier consiste cette admet solution unique qui donne les coefficients ceci donne algorithme csanky quatre grandes algorithme algorithme csanky principe entier une matrice anneau contient corps sortie les coefficients calculer les puissances calculer les traces des matrices inverser matrice triangulaire calculer produit fin analyse pour cet algorithme utilise les technique diviser pour gagner notamment son algorithme csanky application calcul inverse une matrice triangulaire que nous avons algorithme calcul des puissances matrice algorithme calcul des par circuit profondeur log taille par page mais dont les internes des circuits multiplication matrices des circuits taille profondeur log qui donne total pour circuit calcule 
ensuite les traces des matrices les coefficients qui forment matrice triangulaire sont des sommes que calcule pour log calcul fait comme matrice est effet triangulaire fortement proposition calcul matrice fait par circuit enfin calcul qui est produit une matrice triangulaire par vecteur fait par circuit taille profondeur dlog profondeur essentiellement due aux additions fait tout petit peu mieux csanky soit anneau les pour algorithme verrier division par quand elle est possible est unique explicite calcul adjointe inverse une matrice ordre est log preuve une modification algorithme csanky pour une matrice ordre montre que anneau dans lequel est inversible peut par pour algorithme verrier effet soit matrice dans algorithme csanky pour calcul leverrier lieu calculer qui est possible que est inversible dans calcule suffit pour cela calculant produit qui revient calculer les valeurs des variables point permet alors qui calcul celui des puissances calcul fait calcul des par exemple proposition nous laissons lecteur lectrice terminer pour qui concerne les calculs adjointe inverse variante signalons existe une variante verrier due qui donne une famille uniforme circuits avec divisions calculant avec une faible profondeur sur corps finie utilise suivant concernant les sommes newton connu sous nom kakeya proposition soit une partie finie correspondant sommes newton sur corps nulle alors est fondamental sur seulement est stable pour addition dans par exemple pour tout entier positif partie des premiers entiers naturels qui sont pas des multiples satisfait utilise pour adapter verrier calcul sur corps notez implique que les sommes newton les skp prenons maintenant exemple est nous sommes sur corps nous les relations newton qui preparata sarwate donnent les sommes pour page compte tenu des relations matrice est point non trivial est que est pas une fonction identiquement nulle corps base est infini fait dans cas les comme des les comme par les relations les sont cela implique alors que les peuvent exprimer comme fractions rationnelles les avec pour autre point non trivial consiste les type lorsque correspondant est non nul par algorithme avec divisions bien algorithme correspond une famille circuits avec divisions dans voir aussi livre annexe pages preparata sarwate principe anneau les pour algorithme verrier une matrice leverrier par preparata sarwate algorithme csanky provient fait que pour calculer les traces pas besoin calculer toutes les puissances suffit effet pose disposer des matrices qui revient calculer puissances matrices lieu des puissances est fait appel pour cela deux powers superpowers permettant calculer les puissances successives une matrice jusqu ordre les traces des puissances seront alors obtenues les matrices suivante est matrice uniquement des lignes des matrices est matrice des colonnes des autres matrices les matrices sont des matrices ordre dont les coefficients sont autres que les diagonaux des matrices plus ukl qui est position dans matrice qui est obtenu par multiplication ligne matrice par colonne matrice apl est donc diagonale produit apl que ukl pour par position matrice posant prend toutes les valeurs comprises entre quand varient obtient avec les notations pour ukl sont respectivement quotient reste euclidiens par comme les matrices sont disponibles cela nous donne donc les traces toutes les puissances donc celles toutes les matrices puisque algorithme preparata sarwate qui comprend deux parties pour calcul matrice preparata sarwate pour calcul adjointe inverse cette 
matrice calcul avant donner algorithme page suivante voyons tout abord les dans cet algorithme agit essentiellement superpowers qui est partir powers vue calcul des puissances une matrice dans notre cas est matrice chacune ces deux prend donc entier donne sortie matrice rectangulaire des puissances powers superpowers powers powers pour faire superpowers dlog powers pour faire ask powers cela donne toutes les puissances jusqu ordre pour faire asr pour avoir les puissances restantes algorithme nous utilisons comme habitude les notations page nous allons les famille circuits algorithme preparata sarwate par des utilise les pour algorithme principal par colonne les auxiliaires powers colonne superpowers colonne spw seront tableau suivant respectivement par leverrier algorithme algorithme preparata sarwate entier une matrice sortie vecteur des coefficients les calcul avec calculer les puissances appelant superpowers calculer les puissances faisant superpowers calculer les produits former vecteur matrice triangulaire calculant partir des matrices ukl obtenues les traces ukl prendra pour chaque valeur calculer utilisant approche diviser pour gagner calculer produit taille profondeur largeur spw powers nous donne les relations max pour par sommation dlog dlog log log superpowers dans laquelle dlog permet max calcul des entiers intervient pas fait partie construction circuit correspondant preparata sarwate qui donnent avec les majorations log dlog log log log algorithme utilise plus des une inversion matrice triangulaire nous avons proposition que inversion une matrice triangulaire fortement fait par circuit taille par donc par profondeur plus log log largeur est applique principe brent ceci permet partie rithme principal compte tenu fait que que tableau indique des majorations pour taille profondeur pour chaque algorithme preparata sarwate etapes etape etape etape etape etape etape total taille log log profondeur dlog log dlog log log dlog log dlog tableau suivant preparata sarwate dans lequel nous avons calcul adjointe inverse qui constitue partie cet algorithme soit anneau les pour algorithme verrier adjointe inverse existe une matrice fait par circuit taille profondeur leverrier largeur respectivement par log les constantes asymptotiques multiplication des matrices log calcul adjointe inverse algorithme preparata sarwate calcule pas toutes les puissances matrice par calcul adjointe partir formule doit faire utilisant que les puissances avec plus les coefficients les matrices des lignes des puissances disponibles astuce est les matrices avec les coefficients avec convention rappelons que calcule ensuite somme les calculs sur dlog avec maximum log multiplications matrices des produits type apk par adja puisque une cette somme est part que autre part est compris entre correspond unique couple tel que division euclidienne par qui donne adjointe puis inverse ainsi partie algorithme preparata sarwate pour calcul adjointe inverse peut comme suit les puissances matrice ainsi que les puissances matrice toutes disponibles issue des deux algorithme principal matrice des matrices preparata sarwate enfin matrice partir des coefficients alors cij est facile voir que ligne cette matrice est autre que ligne matrice est matrice des lignes des matrices sortie adjointe inverse les matrices adja adja les calcul faisant suite aux qui calculent elles seront pose dlog matrice ordre nulle calculer produit cln qui revient calculer les produits matrice qui est une matrice par les matrices clk qui sont des matrices cette 
permet les matrices apk pour faire calculer adj apk calculer adj cette partie algorithme preparata sarwate les bornes que algorithme principal fait log comportant total dans anneau base utilisant maximum log processeurs les sont les plus elles correspondent total comportant multiplications matrices ordre leverrier base quoi faut rajouter des additions matrices cela fait circuit profondeur log log log log log nombre processeurs cours ces est log log log puisque log galil pan galil pan les les plus algorithme quatre multiplications matrices rectangulaires agit plus des algorithme principal calcul une part des calcul adjointe autre part par une des algorithme principal qui font intervenir les powers superpowers remplace appel ces par appel une unique permettant calculer les matrices partir des matrices cela fait effectuant produit une matrice rectangulaire par une matrice rectangulaire qui donne les puissances restantes algorithme principal calcule les produits pour les traces des puissances est possible cette calcul seul produit deux matrices rectangulaires types respectifs effet les chaque matrice pour sur une seule ligne par suite ses lignes par fait avec galil pan les matrices apk mais cette fois chacune elles sur une seule colonne apk sera donc dans ordre ses colonnes par calcul des traces revient alors calculer produit des deux matrices rectangulaires est clair que ligne colonne cette matrice est modifie enfin les calcul adjointe prenant avoir change les dimensions matrice par cij avec les notations convention pour les cij ainsi que les dimensions des matrices les par des matrices exactement mais partir des lignes des matrices qui fait elles sont type lieu type calcule alors matrice effectuant produit une matrice par une matrice tenant compte fait que ligne bloc est autre que ligne matrice ici avec ces modifications les donc comme peut constater calcul produit deux matrices rectangulaires avec les notations que qui est produit une leverrier matrice par une matrice atq posant dernier produit est effet comme que pour matricep ainsi obtenue est exactement adjointe adja les calculs ces quatre produits matrices rectangulaires auxquels galil pan algorithme preparata sarwate qui sont des multiplications respectifs par tableau suivant ordres multiplication multiplication multiplication multiplication multiplication facteur facteur effectuent fait autre part appel aux concernant les notions algorithme rang tensoriel voir section pour algorithme preparata sarwate ainsi faisant passer exposant dans cette taille nombre processeurs prend winograd coppersmith rappelons voir section que rang tensoriel application multiplication des matrices par des matrices coefficients dans note pia cette application est comme rang algorithme tenseur galil pan pia nombre minimum multiplications essentielles calcul correspondant rang est nous omettons indice dans mesure tous les appliquent importe quel anneau outre les dans section coppersmith pour cas des matrices rectangulaires qui nous occupe ici est par galil pan pour existe une constante positive dans premier temps log log puis qui pour tout les modifications des les plus aboutissent des multiplications matrices rectangulaires rangs respectifs multiplication multiplication multiplication multiplication multiplication rang tensoriel aussi alors galil pan calcul adjointe inverse une matrice ordre est log est strictement positif particulier pour taille circuit est suffit effet pour les quatre rangs tensoriels dans tableau utilisant constante multiplication des 
matrices rectangulaires pour cela pose qui donne les estimations qui entre elles donnent leverrier comme peut prendre qui pour multiplication pour les trois autres multiplications remarque que une multiplication par blocs que par avec prend avec avec pour prenant qui correspond cas concret cela donne bien inf pour importe strictement inf pour fin compte exposant dans asymptotique pour calcul adjointe par preparata sarwate est lieu galil pan pour conclusion les algorithmes csanky preparata sarwate galil pan sont fait que des variantes verrier mais elles ont avoir spectaculaire des circuits permettant ces dans cas anneau commutatif autorisant les divisions exactes par les entiers les estimations ces algorithmes dans cas tels anneaux restent les meilleures connues heure actuelle calcul sur anneau commutatif arbitraire introduction dans chapitre nous des algorithmes bien calcul sur anneau commutatif arbitraire premier cette sorte dans section obtenu estimation son temps est pessimiste mais reste grand dans les sections suivantes nous expliquons les algorithmes chistov berkowitz qui sont dans log notera que est cependant moins bon temps que pour algorithme preparata sarwate qui division par entier arbitraire celui kaltofen qui est pas bien tout programme donc tout circuit sans division sur anneau calcule valiant skyum berkowitz rackoff important suivant preuve est bien dans bur sur anneau arbitraire soit circuit sans division taille qui calcule variables sur anneau alors existe circuit taille profondeur log log qui calcule les composantes outre construction partir est logspace particulier corollaire toute famille qui peut moyen une famille uniforme circuits peut aussi dans classe appliquant algorithme pivot gauss auquel fait subir des divisions strassen que calcule est obtient suivant borodin hopcroft von zur gathen proposition une matrice est par programme taille log profondeur dans construction correspondant est multiplication rapide des avec multiplication usuelle des proposition donne dans anneau base algorithme berkowitz introduction utilisant partitionnement gas samuelson berkowitz exhiber circuit taille profondeur est positif quelconque trouve dans bur majoration log log log pour profondeur terme log est lorsque log mais convention notation que nous avons choisie pour log conforme longueur code binaire nous donne log algorithme berkowitz ainsi asymptotique calcul des adjointes matrices coefficients dans anneau commutatif quelconque nous allons donner une version algorithme berkowitz due eberly qui taille log sans changer profondeur pour cela nous donnons une version plus simple pour calcul des coefficients nous donnons une estimation constante qui intervient dans grand soit aij une matrice ordre sur anneau commutatif arbitraire aux notations introduites dans section pour tout entier par principale dominante ordre notera ici matrice matrice rappelons formule samuelson vue section notons peut aussi formule samuelson sous forme toep est vecteur colonne des coefficients toep est matrice toeplitz suivante partir toep calcul consiste donc sur anneau arbitraire calculer abord les coefficients matrice toep qui interviennent dans qui revient famille lorsque sont respectivement des matrices lorsque est entier tel que calculer ensuite dont vecteur des coefficients compte tenu est par toep toep toep dans son papier original berkowitz que les familles peuvent par circuit pour que calcul fait version nous utilisons comme habitude notation page proposition entier des matrices famille peut par circuit 
dont taille profondeur sont respectivement par log log preuve soit utilisera pour analyse des algorithmes les entiers log dlog qui les aussi toute matrice ordre sera selon cas soit dans une matrice ordre soit dans une matrice ordre chacun des ici une matrice nulle dimensions convenables algorithme berkowitz faut cependant que dans les deux cas remarquer matrice fait aide circuit taille profondeur log puisque produit une matrice par une matrice type avec peut obtenu par produit une matrice par une matrice cause fait que dans ces deux produits type les colonnes sont les alors que les colonnes restantes sont nulles qui fait que produit question peut obtenu par multiplications blocs additions des blocs produits obtenus par circuit taille profondeur pour matrice dont les lignes sont les famille comme une matrice matrice dont les colonnes sont les famille comme une matrice famille obtient alors calculant matrice puis matrice enfin produit matriciel famille par est matrice wij puisque wij seulement seulement calcul fait donc deux phases une phase calcul des matrices une phase calcul produit phase calcul proche proche partir des puissances obtenues par successives les matrices premier terme crochet provient des multiplications blocs second terme indique nombre additions dues aux additions des blocs sur anneau arbitraire effet pour qui donne plus les relations matricielles suivantes pose avec algorithme suivant pour calcul comportant successives partir des initiales consiste calculer pour cela deux seront sur qui est une matrice multiplier gauche par qui est une matrice fin ces obtient matrice matrice figure etape etape etape etape etape figure calcul les liens trait indiquent les multiplications effectuer cours une pour passer suivante algorithme berkowitz consiste calculer encore agit une matrice multiplier droite par qui est une matrice issue ces nouvelles obtient matrice figure etape etape etape etape figure calcul les liens trait indiquent les multiplications effectuer cours une pour passer suivante utilise les multiplications par blocs ils sont ici nombre blocs resp est par circuit taille profondeur max puisque est tenant compte fait que log algorithme calculant est donc par circuit profondeur log log taille par log donc par log qui est log cette taille est donc par log log sur anneau arbitraire qui est clairement log phase cette phase consiste calculer produit qui peut effectuer par des multiplications blocs grandes consiste calculer produits blocs avec dans anneau base qui donne une profondeur totale agit dans calculer somme des produits obtenus faisant intervenir additions dans anneau base aide une famille circuits binaires profondeur nombre total dans anneau base qui interviennent dans ces deux grandes calcul correspondant une profondeur totale est donc par puisque qui fait aussi avec une constante asymptotique ainsi calcul partir fait par circuit taille profondeur puisque log log log nous dans tableau analyse qui vient faite qui etapes phase phase total profondeur log log log log log taille log log essentielle avec algorithme berkowitz dans simplification permettant calculer proche proche les matrices chaque pas multiplication par une algorithme berkowitz seule matrice lieu matrices avec recours multiplication par blocs permis nombre dans anneau base facteur proposition permis donner une estimation constante asymptotique cette constante est effet elle est que constante asymptotique multiplication des matrices les coefficients une matrice ordre peuvent par circuit dont taille profondeur 
sont respectivement par log par log preuve matrice aij est autre que par formule toep toep toep calcul des coefficients forme pour fait proposition log plus calcul des matrices toep fait donc avec une profondeur par log log une taille par log par log cause fait log log autre part produit peut aide circuit binaire avec base une profondeur par sur anneau arbitraire cela donne fin compte dans anneau base nombre total par log log avec circuit profondeur par log log proposition les coefficients des toutes les principales dominantes une matrice ordre peuvent log avec les estimations que celles pour les constantes asymptotiques preuve effet les coefficients principale dominante sont par les vecteurs toep toep toep ces vecteurs sont autres que les troncatures successives pour allant second membre ils peuvent donc par algorithme des circuit que nous avons figure correspond une des solutions calcul des que nous avons dans section agit circuit profondeur dlog taille par comme agit multiplications matricielles chaque interne circuit par une croix dans figure correspond circuit multiplication matrices profondeur log avec dans anneau base calcul des partir des matrices toep fait donc par circuit taille profondeur par log log conclut que pour produit des matrices toeplitz corollaire adjointe une matrice ordre calculent log avec les bornes que celles pour les constantes asymptotiques preuve est autre que autre part matrice adjointe est par formule adj algorithme berkowitz toep toep toep toep toep toep figure calcul des pour ici preparata sarwate voir section donnent algorithme powers pour calculer les puissances avec circuit profondeur log log taille par est alors obtenu remarquant que adj calcule partir des puissances dlog avec base remarque baur strassen pour calcul des partielles section montre que calcul adjointe une matrice toujours voisin celui son construction originale pas profondeur mais par kaltofen singer tout circuit taille profondeur calculant une fonction polynomiale sur anneau une fonction rationnelle sur corps donne circuit taille profondeur qui calcule fonction toutes parallel prefix algorithm section donne pour profondeur mais une taille par sur anneau arbitraire ses partielles ceci nombre variables circuit remarque les que nous venons citent que deux taille profondeur des circuits mais une analyse minutieuse des algorithmes nous permet avoir nombre processeurs par ces algorithmes dans pram largeur circuit correspondant peut fonction largeur circuit profondeur log taille qui calcule produit deux matrices ordre est facile que est que celui obtenu par application directe principe brent cet algorithme nombre processeurs ordre log remarque concernant les questions construction des circuits ainsi que taille des coefficients travail matera turull torres donne dans cas anneau des entiers relatifs une construction effective avec une taille bien des coefficients des circuits base qui interviennent dans algorithme berkowitz traduisant les addition multiplication par des circuits profondeur log est taille maximum binaire des coefficients matrice ils obtiennent pour multiplication deux matrices sur circuit taille profondeur log pour taille des coefficients une majoration ordre log pour algorithme berkowitz une famille uniforme circuits profondeur log log taille cette construction algorithme que nous avons donne une famille uniforme circuits profondeur avec majoration pour taille des coefficients mais taille facteur ainsi provient essentiellement des correspondant aux figures page page notre 
algorithme chistov chistov introduction une matrice pose est principale dominante ordre det algorithme est sur les formules suivantes sont les section valables dans anneau des ordre det notant colonne mod rappelons alors principe algorithme section algorithme chistov principe matrice sortie calculer pour produits qui donne les formule calculer produit des modulo qui donne mod formule inverser modulo obtient prendre ordre obtient multipliant par fin sur anneau arbitraire version pour chacune des cet algorithme taille profondeur circuit correspondant qui tire meilleur parti multiplication rapide des matrices permet obtenir temps chacun des agit calculer est obtenu prenant composante doit donc calculer pour tous compris entre pour chaque les produits matrice vecteur pour calcul entier tel que dlog ramenons matrice une matrice remplissant les ainsi toutes nos matrices seront comme des matrices comme une matrice cela change pas les produits autre part entier blog alors successives chacune utilisant matrice puis multiplie droite par matrice pour obtenir matrice matrice fin ces faisant obtient matrice dont les ligne plus les premiers sont autres que les pour chaque ainsi chacune elles comportant une matrice fait une matrice multiplication une matrice par une matrice utilisant pour cette les multiplications par blocs quitte plonger matrice dans chistov une matrice matrice dans une matrice obtient pour chacune des nombre par cela est fait que log qui permet obtenir les majorations suivantes log majoration nombre intervenant pour chaque valeur dans calcul des produits log log comme varie calcul effectue aide circuit taille log profondeur plus taille est par log log profondeur par log log peut remarquer avec multiplication usuelle des matrices correspond circuit profondeur taille log log puisque doit calculer produit ordre des calcul fait aide circuit binaire sur anneau arbitraire agit inverser modulo obtenu est forme est inverser modulo revient calculer produit mod cela effectue log log log log aide circuit binaire peut calcul des deux utilisant une multiplication rapide des mais cela pas sensiblement final cette profondeur intervient pas dans algorithme nous donnons tableau analyse qui vient faite pour algorithme chistov montrant que dernier est log utilise multiplication rapide des matrices avec une estimation des constantes asymptotiques pour taille pour profondeur etape etape profondeur log taille log etape log etape log log log log etape tableau version algorithme chistov utilise multiplication usuelle cela donne algorithme log dans dernier cas algorithme section est donc sur une machine applications des algorithmes algorithme chistov calcule les coefficients une matrice ordre par circuit profondeur taille log avec des constantes asymptotiques respectivement pour profondeur pour taille enfin comme dans cas algorithme berkowitz obtient facilement suivant proposition les coefficients des toutes les principales dominantes une matrice ordre peuvent log par algorithme directement celui correspondant remarque remarquons que dans estimation taille des circuits construits partir des algorithmes chistov berkowitz les termes log sont les pour les deux algorithmes alors que les termes sont respectivement pour chistov seulement pour berkowitz rapport premier coefficient second strictement applications des algorithmes des anneaux commutatifs application dynamique calcul des des toutes les principales une matrice trouve une application dynamique lorsqu travaille dans dynamique corps trouve dans situation 
standard suivante des variables qui des sur sait que ces triangulaire sorte que corps est quotient une dimension finie sur anneau arbitraire chaque est unitaire cela donne structure explicite cette peut contenir des diviseurs qui signifie que plusieurs situations sont par seul calcul dans lorsqu pose question programme doit calculer les coefficients par rapport variable une discussion cas par cas ensuit une solution est calculer ces coefficients dans utilisant algorithme des qui des divisions exactes situe naturellement dans cadre anneau puis les modulo trois calcul lourd une montre bien meilleur taille des objets fait tous les calculs dans malheureusement algorithme des peut plus appliquer effet des divisions requises par algorithme peuvent impossibles est corps division peut demander effort par rapport aux multiplications aussi que algorithme berkowitz celui chistov matrice sylvester des offre meilleure solution art actuel pour calculer ces coefficients faut noter cet que algorithme etc nulle celui par kaltofen section arbitraire ont des performances algorithme berkowitz que pour calcul mais non pour calcul tous les mineurs principaux dominants une matrice signalons aussi que dans cas utilise dynamique pour corps certaines discussions cas par cas font appel aux signes tous les coefficients une autre application algorithme calcul dynamique est signature une forme quadratique par une matrice arbitraire dans cas seule connaissance des signes des mineurs principaux dominants matrice suffit pas toujours pour certifier rang applications des algorithmes signature pourra consulter sujet livre gan mais est pas difficile voir que connaissance des signes des coefficients matrice permet calculer certifier rang signature forme quadratique qui lui est cas des matrices creuses signalons pour terminer que algorithme berkowitz celui chistov sont bien cas des matrices creuses notamment version nombre passe lorsque seulement coefficients matrice sont non nuls parmi les autres algorithmes celui peut cas des matrices creuses avec une diminution similaire nombre cela suffit dans cas une matrice fortement tableaux des dans cette section nous donnons les tableaux des pour les algorithmes figure notamment tableau des des algorithmes version utilisant que multiplication usuelle des matrices des des entiers que nous avons mot cte signifie constante asymptotique pour les estimations taille des circuits val signifie domaine signifie anneau commutatif arbitraire signifie anneau algorithme pour les divisions exactes signifie anneau clos algorithme pour les divisions exactes signifie division par quand elle est possible est unique explicite prob signifie algorithme nature probabiliste agit algorithme wiedemann qui fonctionne sur les corps avec des variantes possibles dans cas les sigles respectivement multiplication rapide multiplication usuelle des rappelons que nous notons nombre dans multiplication deux profondeur log avec suba nlog log log log log selon les anneaux les initiales les algorithmes gauss sur corps sur anneau algorithme pour les divisions exactes pour calcul des rappelons que algorithme qui consomme peu plus des avantages significatifs par rapport algorithme pivot gauss dans nombreux anneaux commutatifs comme par exemple les anneaux coefficients entiers calcul des simples algorithme taille cte val date gauss corps gauss avec des divisions rapides profondeur bunch hopcroft corps voir proposition dans premier tableau nous avons colonne pour traitement des matrices creuses une matrice environ coefficients 
non nuls certains algorithmes sont leur temps par nous avons cette par oui dans colonne tableaux des calcul versions simples algorithme taille cte val wiedemann prob oui hessenberg corps frobenius berkowitz oui chistov oui verrier oui interpolation lagrange corps kaltofenwiedemann gauss avec des divisions preparata sarwate oui taille avec multiplication rapide des log gauss avec des divisions calcul rapides taille cte val log corps interpolation lagrange corps algorithme kaltofenwiedemann calcul profondeur algorithme taille cte val csanky preparata sarwate galil pan berkowitz chistov log berkowitz log borodin hopcroft gathen colonne donne constante asymptotique temps nombre enfin est positif arbitrairement petit des tests des tests les algorithmes dans les tableaux comparaison que nous ont aide logiciel calcul formel maple dans langage programmation qui lui est les algorithmes sont ceux tableau les versions simples pour calcul nous avons dans colonne linalpoly les performances algorithme par maple dans version les versions plus logiciel utilisent algorithme berkowitz chacun des tests comparaison entre les algorithmes sur une machine avec matrices les matrices font partie des groupes suivants selon type anneau base choisi groupe les matrices randmatrix qui sont des matrices ordre coefficients pris hasard entre dans anneau des entiers relatifs groupe les matrices athard dont les sont des total les coefficients ces sont aussi des entiers compris entre groupe les matrices atmod lisvar ideal qui sont des matrices ordre dont les coefficients sont des choisis hasard dans lisvar est entier positif prendra premier lisvar une liste variables ideal une liste lisvar coefficients dans anneau base est donc ici sauf exception anneau dans lequel division est pas permise groupe les matrices jou ordre coefficients dans dont les coefficients sont par jou pour quelle que soit valeur rang matrice jou pas est qui explique dans cas des algorithmes les programmes ont avec version maple release nettement plus performants pour les matrices rang petit groupe sont des matrices creuses coefficients entiers choisis hasard entre elles sont par maple randmatrix sparse quant aux machines agit essentiellement dec mhz centrale les matrices intervenant dans les comparaisons sont par des codes maple une matmod par exemple une matrice groupe partir deux entiers positifs taille matrice calcule modulo une liste variables lisvar une liste ideal lisvar comprenant autant que variables chacun des unitaire ceci afin illustrer genre application algorithme berkowitz lorsqu place dans lisvar situation dans section matmod utilise comme polmod dans annexe qui prend nombre entier lisvar donne sortie simple image canonique dans lisvar tableaux comparaison nous donnons dans les trois pages qui suivent les tableaux correspondant aux cinq groupes matrices que nous avons agit que quelques exemples mais ils sont significatifs comparaison entre comportement pratique des algorithmes montre bon accord avec les calculs surtout prend compte taille des objets par les algorithmes sauf exception algorithme berkowitz est plus performant suivi par celui chistov les performances priori meilleures pour les algorithmes hessenberg frobenius wiedemann avec des tests portant sur des matrices coefficients dans des corps finis effet avantage nombre est par plus notamment laboratoire gage ecole polytechnique tableaux comparaison vaise taille des objets par exemple que anneau des coefficients contient aurait fallu autre groupe matrices pour mettre cet avantage 
serait version simple algorithme preparata sarwate dans groupe nous avons pris des matrices ordre coefficients dans pour valeurs comprises entre dans groupe sont des matrices ordre coefficients dans pour des matrices coefficients dans pour parmi les matrices groupe nous avons matrices ordre coefficients dans pour lesquelles faddeev applique pas des matrices coefficients dans pour lesquelles faddeev applique ici est par les deux dans groupe sont des matrices ordre coefficients dans mais rang petit enfin les matrices groupe des matrices creuses coefficients entiers choisis hasard entre ont prises parmi les matrices randmatrix sparse telles que linalpoly berkosam chistov faddeev barmodif maple linalg charpoly correspondant signifie temps cpu time plus moins long calcul message erreur memory lorsque mem seuil mbytes matrice cpu time mem cpu time mem cpu time mem cpu time mem cpu time mem cpu time mem cpu time mem cpu time mem matrices coefficients entiers premier groupe matrices denses hessenberg cpu time mem cpu time mem cpu time mem berkosam linalpoly matrice cpu time mem cpu time mem cpu time mem cpu time mem groupe mathard chistov barmodif faddeev tableaux comparaison linalpoly berkomod chistov linalpoly berkomod chistov signifie que faddeev est pas applicable dans cas puisque matrice cpu time mem cpu time mem cpu time mem pour lisvar ideal matrice cpu time mem cpu time mem cpu time mem pour lisvar ideal groupe matmod lisvar ideal barmodif barmodif faddeev faddeev linalpoly berkosam chistov faddeev barmodif jou est une matrice dont les coefficients jij sont par formule jij pour son rang est pour tout pour tout entier positif remarque des algorithmes faddeev dans cas exceptionnel matrice cpu time mem cpu time mem cpu time mem cpu time mem matrices rang petit groupe jou tableaux comparaison matrice cpu time mem cpu time mem cpu time mem cpu time mem cpu time mem linalpoly heures calcul matrices coefficients entiers berkosam creux groupe matrices creuses chistov creux calcul calcul les expressions introduction chapitre suivant donnent quelques sur travail valiant notamment dans lequel analogue conjecture notre doit beaucoup survey von zur gathen livre bur une autre classique est livre clausen shokrollahi bcs dans section nous discutons codages possibles pour sur anneau section est pour essentiel brent pour des expressions dans section nous montrons pourquoi plupart des sont difficiles calculer enfin section expose valiant sur universel expressions circuits descriptions nous nous dans cette section approches concernant codage arbitraire sur anneau commutatif codage des est une coder est donner son total les noms ses variables liste ses coefficients dans ordre convenu est que nous avons dense des est raisonnable penser que pour immense des rien mieux faire nous donnerons dans cette direction voir les expressions certains ont relativement peu coefficients non nuls peut choisir pour leur codage une creuse dans laquelle donne liste des couples coefficient non nul effectivement dans chaque par liste des exposants chaque variable binaire par exemple sera par des codes pour taille une creuse dense est longueur mot qui code taille peut point vue purement auquel cas chaque constante chaque variable conventionnellement longueur point faible creuse est que produit petit nombre creux est dense comme montre exemple classique suivant autre codage naturel est utilisation des expressions une expression est mot bien qui utilise comme base les les symboles variables une part les symboles autre part enfin les 
ouvrante fermante point vue peu plus abstrait une expression est vue comme arbre aux feuilles arbre des les constantes des symboles variables chaque est par outre deux branches partent exactement chaque racine arbre expression taille une expression peut point vue purement prend alors nombre dans arbre sans compter les feuilles taille est alors nombre feuilles moins adopte point vue proprement informatique faut prendre compte pour taille longueur explicite expression dans langage les constantes les variables ont des codes travaille avec nombre fini constantes taille expression peut comme simplement proportionnelle taille ceci parce que ensemble des variables est pas priori expressions circuits descriptions figure arbre expression horner dense peut naturellement vue comme une par expressions dans laquelle seules sont des canoniques nombre coefficients variables est par expression permet exprimer certains une petite mais sont les les plus sous forme plus compacte plus efficace qui concerne leur donnons trois exemples premier est celui horner une variable dans les deux expression dense pour son multiplications expression horner dans second membre seulement obtient respectivement respectivement pour expression expression horner exemple est celui produit expression cidessous qui est taille comme une somme une taille ordre creuse plus grande encore dense les expressions exemple sur lequel nous reviendrons plus est celui une matrice dont les sont variables sait pas cette famille peut non par une famille expressions taille polynomiale dont taille serait par log avec conjecture que est faux par contre nous verrons que peut par une expression taille par log avec est clair dense comme creuse une taille pour donc asympk totiquement beaucoup plus grande que log notez par contre que famille exemple occupe une taille exponentielle par expressions arithn cause peut pas plus grand que taille une expression qui exprime codage naturel est celui que nous avons retenu pour ensemble cet ouvrage codage par les programmes qui revient par les circuits une expression peut vue comme cas particulier circuit taille tant expression est que celle circuit qui lui correspond est nombre lors circuit creuse peut efficacement par circuit pour circuit les pertinents sont fois taille profondeur par circuit profondeur par est par les familles qui peuvent par des familles circuits dont profondeur est log deg semble cependant improbable que comme variables qui est dans classe puisse dans une classe log pour entier convention dans les chapitres les circuits les expressions que nous seront toujours sans division sans soustraction rappelons que des divisions strassen montre agit pas une restriction importante surtout dans cas des corps voir soustraction quant elle est deux par expressions circuits descriptions dernier codage naturel que nous envisagerons est celui dans lequel est obtenu sous forme par circuit une expression ceci peut sembler priori artificiel mais nous verrons dans chapitre que cette des est rapport assez avec conjecture nous donnons maintenant quelques qui discussion soit une famille par coefficients dans anneau commutatif notons nombre variables nous disons que famille est sont par dit encore agit une nous disons une famille expressions est taille est par nous disons que famille est elle est par une famille expressions particulier est nous disons une famille circuits est taille taille est par les tous les noeuds sont par famille est taille nous disons que famille est encore elle est par une famille circuits 
particulier est nous disons que famille est est une pfamille par une famille expressions dont taille est par log avec nous disons que famille est encore qpcalculable est une par une famille circuits dont taille est les expressions nous disons les variables est une description les variables nous disons que famille est existe une famille telle que chaque est une description nous disons que famille est expressions existe une famille telle que chaque est une description faut souligner que toutes les notions introduites ici sont non uniformes demande pas que les familles expressions circuits soient des familles uniformes section nous utiliserons les notations suivantes pour les classes familles correspondant aux est mis pour valiant qui plupart des concepts des des chapitres notation classe des familles est vpe celle des familles vqpe classe des familles est celle des familles vqp classe des familles est celle des familles expressions classe des familles par des familles circuits profondeur logk est des est ces classes sont relativement anneau commutatif besoin anneau notera vpe etc plupart des sont cependant anneau les conjectures sont pour des corps remarque vue proposition existe une famille taille circuits qui calcule une alors existe aussi une famille circuits qui calcule famille pour raison nous aurions demander pour classe vqp que famille des expressions des circuits circuits soit non seulement taille mais aussi des expressions des circuits des expressions des expressions comme celles horner qui sont optimales quant leur taille pour temps criant brent que importe quelle expression peut par circuit par une expression dont profondeur est logarithmique taille expression initiale brent pour tout profondeur meilleur circuit taille meilleure expression sont par log log log log calcul plus dans bcs donne log est nombre preuve dans cette preuve nous notons nombre feuilles arbre correspondant expression est taille expression profondeur circuit une expression est facile est une variable une constante profondeur taille sont nulles sinon lorsque est par circuit suppose avoir avec des expressions obtient max fait calcul correspond une qui circuit une expression profondeur arbre une expression profondeur plus feuilles est nettement plus subtile est suivante appelons les variables expression qui nous voyons cette expression comme arbre arbre une voir figure page suivante les expressions cette par une nouvelle variable une feuille obtient une expression arbre qui avec les correspondent des arbres est facile construire partir arbre figure une expression brent effet voir figure page suivante pour substitue dans simplifie pour part racine suit chemin jusqu supprime les avec eux branche qui pas peut alors construire une expression dans laquelle met abord les expressions termine calculant profondeur cette expression est par max pour que cela soit efficace faut bien choisir que les tailles des trois expressions aient dans une proportion suffisante que chacune des expressions est ensuite soumise nouveau traitement ainsi suite cela sans dire choix fait comme suit soit fait rien sinon part racine arbre choisit chaque branche plus lourde donc fois que aura fait pas trop lorsqu apercevra que des expressions des circuits figure une expression brent seuil franchi faudra retourner cran figure une expression brent donc que sont tous deux voulue est donc par elle fonctionne pour les expressions taille notez que les sont uniformes remarque dans transforme circuit une expression profondeur mais taille beaucoup plus 
grande seconde transforme toute expression les expressions mal taille une expression bien dont taille pas trop dont profondeur est devenue logarithmique autrement dit partie difficile brent fonctionne niveau des expressions figure arbre horner brent brent corollaire suivant version uniforme serait valable corollaire vpe conjecture que par contre est pas par une expression taille polynomiale donc que vpe partie facile brent algorithme berkovitz montrent fait est par une expression taille log des circuits rappelons maintenant section valiant voir aussi est quelque sorte analogue pour les circuits brent pour les expressions donne une pour importe quel circuit condition calcule raisonnable donne log fait lors une cela conduit plus log log plupart des sont difficiles circuit purement taille qui calcule peut pas mais est cause son trop soit circuit sans division taille qui calcule variables sur anneau alors existe circuit taille profondeur log log qui calcule implique que avec partie facile brent implique aussi une famille circuits peut une famille circuits profondeur polylogarithmique donc une famille expressions taille bref corollaire vqp vqpe remarque ainsi vqp est classe des familles par des circuits dont nombre variables les sont profondeur est polylogarithmique qui signifie pas pour autant ils soient dans pour vqpe cela brent conjecture contrario que les inclusions sont strictes plupart des sont difficiles pour sous forme qui est dans titre cette section nous avons besoin dont signification est intuitivement vous objet dans espace dimension donnant les comme fonctions polynomiales deux objet que vous obtenez est une surface exceptionnellement une courbe plus exceptionnellement encore point mais jamais objet ainsi remplira espace plus partir des trois qui objet est possible calculer trois variables non identiquement nul tel que soit identiquement nul autrement dit tous les points sont sur surface donc corps base est infini plupart des points sont dehors les expressions point dehors puisque corps est infini existe alors toute droite passant par coupe nombre fini points par corps base est celui des celui des complexes que est ouvert dense qui donne encore une signification intuitive plus claire terme plupart dans phrase peut montrer existence comme suit supposons les par pour les leur avec sont nombre est par donc ils sont dans espace des variables qui est dimension pour assez grand une relation non triviale entre les qui donne nous maintenant qui peut proposition soit corps une famille variables avec alors existe non identiquement nul tel que est identiquement nul termes plus image espace dans espace avec par une application polynomiale est toujours contenue dans une hypersurface proposition signification intuitive que moins lorsque nous notre dans lequel expresssion plupart doit comprise sens discussion qui proposition soit corps infini des entiers ensemble des variables est espace vectoriel sur dimension soit une constante arbitraire notons famille tous les circuits qui des avec plus constantes aux portes dont taille est par alors plupart des sont pas par circuit dans particulier pour plupart des taille meilleur circuit admet minoration universel preuve chaque circuit dans peut comme calculant dans lequel les sont les constantes circuit variables correspondant fournit lorsqu fait varier les constantes une application vers chaque cette application est une fonction polynomiale fait majorer taille circuit par implique que les correspondants variables sont nombre fini finalement les par 
circuit dans sont contenus dans une finie hypersurfaces proposition qui est encore une hypersurface trouvera dans des plus sur sujet remarque notez contrario que circuit qui exprime directement comme somme ses constantes peut avec une taille utilise tous les qui sont nombre produit peuvent effet est multipliant produit par une variable reste ensuite multiplier chaque produit par une constante convenable puis faire addition universel cette section est montrer que toute expression peut vue comme cas particulier expression dans laquelle les matrice ont simplement par une des constantes une des variables expression avec outre fait que nombre lignes matrice est ordre grandeur que taille expression ceci est pas surprenant exemple classique matrice compagnon dans lequel nous avons pas les nulles det les expressions projections nous introduisons maintenant formellement une notion projection pour processus substitution auquel nous allons avoir recours dans suite soit anneau commutatif soient soient aussi des coefficients dans dit que est une projection est obtenu partir substituant chaque dit que famille est une famille existe une fonction polynomialement telle que pour chaque est une projection dit que famille est une famille existe une fonction telle que pour chaque est une projection proposition suivante est facile proposition deux projections est une projection chose pour les pour les les classes vpe sont stables par classe vqp vqpe est stable par une expression comme dans valiant qui suit est produire une matrice ayant pour somme des deux autres matrices est faire cette construction non pas pour importe quelles matrices mais respectant certain format est objet lem crucial qui suit format des matrices qui interviennent dans lem est sur exemple avec universel est vecteur ligne vecteur colonne est unitriangulaire notez que lorsqu une telle matrice sous forme cij cij facteur est nul colonne ligne coupent dans partie strictement matrice puisque cofacteur correspondant est nul lem soient pour deux entiers deux matrices adi unitriangulaires soient deux vecteurs lignes deux vecteurs colonnes adi les trois matrices suivantes alors det det det preuve nous donnons seulement directrice cette preuve peu technique lorsqu comme avant lem facteur produit est nul car ligne colonne correspondante dans commentaire tie strictement matrice juste avant lem pour voir que facteur produit est nul suffit matrice son est identique tant expression celui argument applique reste dans complet somme produits les produits contenant facteur ceux contenant facteur examen attentif montre que les seuls produits non nuls type sont ceux qui empruntent diagonale donc retrouve exactement les facteurs dans det signe signe correspond une permutation circulaire des colonnes les expressions toute expression taille est projection une matrice ordre matrice est dans format lem ses sont soit une constante expression soit une variable expression soit colonne resp ligne contient que une colonne quelconque contient plus une variable une constante expression corollaire toute famille est une famille detn est une matrice ordre donc variables vqp vqpe avec classe des familles qui sont des famille preuve construit matrice suivant arbre expression pour une feuille arbre constante variable prend matrice qui bien aux supposons construit les matrices qui ont pour les voyons abord matrice pour quitte changer colonne peut aussi avoir les peut donc dans tous les cas appliquer lem donnons enfin matrice pour avec cette matrice aux voulues comme elle est 
triangulaire par blocs son est donnons par exemple matrice construite comme dans preuve pour obtenir nous avons universel pas mis les conclusion dans corollaire que toute famille est une que toute famille est une cette sous forme suivante qui ressemble famille detn est universelle pour vqp les cependant est probablement pas est donc mieux que pose donc question trouver une famille qui soit universelle dans vpe par rapport aux celle trouver une famille qui soit universelle dans par rapport aux serait candidat naturel mais pour moment pas son sujet premier ces deux positivement par fich von zur gathen rackoff dans par dans question admet une assez facile une fois connu brent effet toute expression peut obtenue comme projection une expression profondeur comparable qui combine additions multiplications par exemple expression profondeur donne par projection choix une des deux expressions profondeur maintenant remplace chacun des par expression obtient une expression variables profondeur voit que toute expression profondeur est une projection processus toute expression profondeur est une projection expression qui est profondeur donc brent une famille dans vpe famille est clairement une famille enfin famille est dans vpe car est taille les expressions figure une famille expressions dans vpe peut des scrupules pour tout entier posant question trouver une famille qui soit universelle dans par rapport aux admet une positive style mais nettement plus permanent conjecture introduction chapitre est conjecture valiant nous que les les plus simples nous souhaitons faire sentir importance des enjeux dans section nous faisons une rapide des classes qui constituent une variante non uniforme binaire dans section nous mettons quelques liens simples entre fonctions entre dans section nous faisons lien entre binaire dans section nous donnons quelques sur permanent dans section finale nous rappelons conjecture valiant discutons parmi les utiles pour chapitre faut citer livre weg article non encore familles expressions circuits expressions circuits descriptions analogue anneau est boole permanent conjecture avec dans les cette boole librement par comme quotient anneau variables sur corps laisse penser que les sont priori pertinentes pour les analogue une fonction variables est une fonction nous aurons aussi des applications boole est isomorphe des fonctions isomorphisme fait correspondre fonction rappelons que sont les variables dans une expression appelle une des expressions une expression est dite forme normale conjonctive resp forme normale disjonctive elle est une conjonction disjonctions resp une disjonction conjonctions plusieurs types canoniques pour une fonction forme normale conjonctive forme normale disjonctive sous forme creuse chaque variable intervenant avec dans chaque peut aussi exprimer une fonction moyen une expression circuit convention nous adopterons convention une expression circuit utilisent que les connecteurs outre dans cas une expression usage connecteur sera seulement implicite utilisera les comme variables aux feuilles arbre nulle part ailleurs connecteur taille profondeur une expression prendront compte que les connecteurs les sont tous comme profondeur nulle cette convention pas importante qui concerne les circuits car autoriser autres connecteurs ferait diminuer taille profondeur que facteur constant par contre qui concerne les expressions agit une restriction significative leur pouvoir expression par exemple admet plus familles expressions circuits def connecteur expression 
probablement une nettement plus longue sans utilisation classes nous sommes ici par les analogues des classes vpe section soit une famille fonctions par nous disons que famille est est par dit encore agit une fonctions nous disons une famille expressions est taille est par nous disons que famille est elle est par une famille expressions classe des familles fonctions est bpe nous disons une famille circuits est taille est par nous disons que famille est encore elle est par une famille circuits classe des familles fonctions est nous notons classe des familles fonctions par une famille circuits taille polynomiale profondeur logk des nous disons une fonction les variables est une description fonction les variables nous disons que famille est existe une famille fonctions telle que chaque est une description classe des familles fonctions est permanent conjecture nous disons que famille est expressions existe une famille fonctions telle que chaque est une description classe des familles fonctions expressions est faut souligner que toutes les notions introduites ici sont non uniformes comme dans cas classe est clairement analogue classe est aussi analogue non uniforme classe dernier point sera plus clair page nous verrons que classe est analogue non uniforme classe compare les des descriptions dans cas dans cas voit utilise maintenant une disjonction place une somme formules notation voir par exemple bdg weg explique comme suit une famille dans peut temps polynomial droit une aide sous forme une famille circuits qui calculent les fonctions qui est pas uniforme mais qui est taille polynomiale signalons que karp lipton qui introduisent classe dans donnent une pour une variante non uniforme une classe binaire arbitraire leur justifie aussi les enfin karp lipton semble rien donner pour par absence classe binaire des dans livre weg wegener contient une des familles fonctions trouve notamment les dans qui suit concernant des dans fait les sont uniformes ils chapitre weg addition multiplication dans sont par des familles circuits dans plus addition deux entiers taille est par circuit taille profondeur dlog familles expressions circuits produit deux entiers taille est karatsuba par circuit taille nlog profondeur log suivant strassen qui adaptent transformation fourier rapide des cas des entiers par circuit taille log log log profondeur log concernant multiplication des entiers lira aussi avec knuth dans knu des expressions nous avons pour les expressions analogue brent des expressions voir sav pour toute fonction profondeur meilleur circuit taille meilleure expression sont par log log log preuve cela marche que brent des expressions dans cas preuve page doit par une expression dans cas analogue corollaire vpe est corollaire bpe description des circuits par des expressions lem suivant est facile utile lem circuit taille avec les portes les portes internes une seule porte sortie donc peut construire une expression forme normale conjonctive taille profondeur log telle que pour tous ait permanent conjecture outre dans second membre une seule affectation des qui rend expression vraie lorsque preuve remplace chaque affectation programme par circuit par une expression qui est vraie seulement valeur est correcte conjonction toutes ces expressions donne expression une affectation est traduite par une affectation est traduite par une affectation est traduite par proposition inclusion signalons aussi important valiant pour une preuve voir bur pour tout corps expressions circuits descriptions cas des applications 
nous pouvons reprendre avec les familles applications les cette section pour les familles fonctions notre objectif est surtout ici analogue non uniforme classe soit une famille applications par soit famille double fonctions qui donne nous disons que famille est sont par dit encore agit une applications nous disons que famille est elle est famille double correspondante est analogue pour une famille expressions familles expressions circuits nous disons une famille est dans classe encore elle compte les solutions une famille fonctions elle est une famille fonctions famille est dira que famille est dans classe bpe dira que famille est dans resp dans bpe lorsque famille est dans resp dans bpe une description des circuits par les expressions lem est proposition suivante analogue proposition proposition bpe remarque pas principe entre une famille applications une famille fonctions puisque donner une famille applications revient donner une famille double fonctions veut directement classe comme une classe fonctions pourra dire que dans famille est suivant portant sur couple est binaire plupart des fonctions sont difficiles aussi analogue suivant page ici trouve une famille circuits taille peut calculer qune infime partie toutes les fonctions proposition soit vqpb ensemble des familles fonctions variables par une famille cirk cuits taille log soit pour assez grand seulement une proportion fonctions variables est dans vqpb permanent conjecture preuve faisons les comptes nombre total fonctions variables est nombre total circuits variables taille est par effet programme taille est obtenu rajoutant une instruction programme taille instruction forme avec pour sont choisir parmi les parmi les cette majoration conduit log donc log log qui devient devant pour grand trouvera des style mais nettement plus dans chapitre livre wegener weg versus non uniforme des circuits rappelons ici section circuit sur anneau dont les sont binaire anneau est fini temps calcul correspondant circuit est simplement proportionnel profondeur taille circuit par ailleurs rappelons que vpe bpe obtient donc lem une sur anneau fini est dans classe resp vpe son est par une famille dans classe resp bpe dans cas anneau infini circuit peut quelques mauvaises surprises voir exemple inventeur jeu page faudrait bannir toute constante circuit sur veut que avec codage naturel binaire produise pas explosion est anneau infini plus simple une solution serait coder les anneau par des circuits ayant que des constantes mais test point vue des calculs temps polynomial peut remarquer que codage binaire usuel est codage par des expressions versus test signe bien autres simples sur semblent alors sortir classe une autre solution serait apporter une restriction plus aux familles circuits avant introduire moindre constante famille devrait taille ensuite seulement remplacerait certaines variables par des constantes faut avoir une majoration convenable taille des objets calculer une famille fonctions zvn est dite taille est par taille xvn est par taille xvn utilisant les codages binaires usuels alors extension importante suivante lem anneau sous une condition restrictive qui est ailleurs lem une sur suppose que famille fonctions zvn par est taille alors est dans resp vpe son est par une famille circuits dans resp preuve supposons que est dans classe soit une famille circuits correspondant pour tous entiers positifs veut construire circuit qui calcule code xvn partir des codes des lorsqu ils sont taille sait que taille sortie est par entier suffit alors 
prendre les constantes modulo les calculs par circuit modulo pour comme fin calcul taille circuit correspondant est bien polynomialement quant profondeur celle elle par log log pour dans classe pour dans classe ques ayant que les constantes aux feuilles arbre est donc pas artificiel proposer codage par des circuits ayant que les constantes aux portes que nous avions zpreval permanent conjecture notez que est dans que est taille est automatiquement les constantes circuit ont une taille par dans section suivante tous les circuits qui simulent des circuits utilisent les seules constantes simulation des circuits expressions nous nous dans cette section simuler une fonction une application par exemple par circuit nous disons que simule fonction nombre variables que fonction sur des dans analogue pour simulation application par anneau doit contenir lem suivant nous dit que donne simulation naturelle circuit par circuit profondeur taille sont convenables mais les peuvent mauvaises surprises lem circuit taille profondeur peut par circuit taille profondeur profondeur multiplicative reste donc des est cette simulation fonctionne sur tout anneau commutatif non trivial preuve les seules valeurs des sont donc sur importe quel anneau commutatif non trivial nous rappelons que dans les chapitres les seules sont qui nous contraint introduire des multiplications par constante pour faire des soustractions ceci implique que est par circuit profondeur versus simulation une expression par une expression lem suivant est une directe lem lem une expression taille peut par une expression profondeur par log log cette simulation fonctionne sur tout anneau commutatif non trivial particulier taille expression est proposition toute famille dans bpe est par une famille dans vpe cette simulation fonctionne sur tout anneau commutatif non trivial dans bur proposition est avec une terminologie bpe est contenu dans partie vpe une proposition analogue qui voudrait relier aussi simple les classes parce que traduction naturelle circuit circuit lem fournit trop grand autrement dit pas analogue satisfaisant lem pour les circuits supposons maintenant que nous ayons avec une double expressions une famille fonctions sortie est par exemple comme suit dans premier bit code signe les bits suivants codent entier sans signe binaire par exemple avec les entiers sont respectivement par alors aucune calculer par circuit par une expression profondeur log sortie dans partir son code nous pouvons alors proposition suivante qui proposition qui lem proposition soit une double dans bpe qui code une famille fonctions alors existe est par permanent conjecture une expressions dans vpe qui simule famille sur importe quel anneau contenant description circuit par une expression nous pouvons faire une des lemmes pour obtenir une description sens circuit lem soit circuit taille qui calcule une fonction existe une expression taille profondeur dlog cette expression utilise les seules constantes est valable sur tout anneau commutatif non trivial preuve applique simulation dans lem expression forme normale conjonctive construite lem doit simuler chacune des expressions base qui sont type type dans ces expressions est positif des positifs examen montre que taille maximum pour une telle simulation est reste ensuite faire produit expressions chacune correspond des composants dans les deux types obtient alors une expression taille profondeur dlog outre dans second membre une seule affectation des variables dans qui rend expression vraie lorsque que est nulle pour 
tout exception cette valeur nous les corollaires suivants versus proposition toute famille dans est par une famille dans ceci sur tout anneau commutatif non trivial toute famille dans est par une famille dans sur tout anneau commutatif contenant alors preuve supposons soit une famille dans remarquons que est taille par proposition cette famille est par une famille dans donc par une famille dans une telle famille par une famille dans lem fait utilisant des techniques nettement plus subtiles les suivants bur soit corps fini alors soit corps nulle supposons que riemann est vraie alors formes forme une fonction pour traiter les questions taille expressions circuits est priori prometteur une fonction par usuel une traduction simple consiste certaines valeurs fonction remplace fonction variables par suivant variables avec pour seuls exposants dans les nous dirons que est forme sur les variables fonction lorsque permanent conjecture chaque coefficient est une fonction qui doit lorsque une forme pure les coefficients sont tous une analogue est valable remplace par une application fonction est facile calculer correspondant aura ses coefficients faciles mais risque difficile puisqu aura nombre trop grand exponentiel coefficients non nuls alors comme des lem soit une fonction par circuit taille forme sur les variables admet une description sens par une expression profondeur taille cette expression utilise les seuls constantes est valable sur tout anneau commutatif non trivial preuve cela lem constatation suivante pour qui comme une expression profondeur dlog dlog taille donc fonction est par expression lem est admet pour description expression variables profondeur dlog taille forme une famille fonctions une famille fonctions admet pour forme sur les variables famille des qui sont les formes des binaire versus fonctions chose pour forme une famille comme corollaire lem valiant toute famille fonctions dans admet pour forme une famille dans qui convient pour tout anneau commutatif non trivial une famille dans admet pour forme une famille dans cette famille convient pour tout anneau contenant dans cas une fonction cela peut sembler peu puisqu priori est une classe difficile calculer elle simule bpe mais une bonne raison cela effet supposons toutes les variables alors calcule trouve nombre total des solutions somme est donc pas surprenant que soit priori plus difficile calculer que ses coefficients peut que une fonction soit aussi simple calculer que fonction valiant preuve est moyen puissant pour fabriquer des familles dans comme toutes les preuves que nous avons dans les chapitres preuve valiant est clairement uniforme donc est une famille dans prend pour mot par entier suivi puis mot alors forme admet pour description une famille uniforme circuits dans qui utilise les seules constantes qui donne correct sur tout anneau contenant binaire versus famille fonctions algorithmique notons ensemble des mots sur alphabet nous pouvons voir cet ensemble comme disjointe des algorithmique qui est sous forme binaire autrement dit toute instance correspond une permanent conjecture question comme pour certain entier question type oui non est comme peut comme fournissant pour chaque une fonction nous dirons que famille est famille fonctions algorithmique supposons maintenant que porte sur les graphes code naturel pour graphe sommets est matrice ajacence qui est une matrice dans cette matrice contient position seulement une qui dans graphe dans cas voit que famille fonctions est plus rellement comme une famille dira que 
algorithmique est dans une classe famille fonctions qui lui est naturellement est dans famille applications une fonction algorithmique maintenant une fonction algorithmique une fonction aurait envie faire calculer par ordinateur sortie sont binaire comme des supposons que est une majoration taille sortie fonction taille que fonction est pas plus difficile calculer que nous pouvons alors recalibrer fonction que taille sortie que taille son par exemple nous prenons fonction qui pour mot taille calcule mot nombre pour atteindre longueur est clair facilement partir cette convention nous permet associer toute fonction algorithmique une famille applications est une majoration taille sortie fonction binaire versus taille famille applications donc fonction majoration que dans les conditions nous dirons que famille est famille applications fonction algorithmique avec fonction majoration nous pas cette fonction majoration nous disons simplement que famille est une famille applications fonction algorithmique dira que fonction algorithmique est dans une classe famille est dans lorsque fonction est calculable dans une classe binaire connue choisira toujours fonction majoration suffisamment simple que fonction reste dans classe tout que nous venons dire applique par exemple une fonction vers modulo des codages binaires naturels convenables familles uniformes circuits algorithmique qui est sous forme binaire pour chaque une fonction donne pour les mots longueur cette famille peut sous forme une famille expressions sous forme une famille circuits binaire fois taille ces expressions ces circuits proprement algorithmique peut avoir produire expression circuit fonction unaire aspect correspond question famille uniforme donne temps polynomial termes familles circuits bdg soit algorithmique sous forme binaire est temps par une machine turing une seule bande peut construire temps une famille circuits qui famille fonctions permanent conjecture est temps polynomial par une machine turing seulement existe une famille uniforme circuits qui famille fonctions est important air mordre peu queue puisque famille doit uniforme calculable temps polynomial par une machine turing est pas trop est existence une machine turing universelle qui travaille temps polynomial fait que calcul sur une taille bout est bien celui peut programme main peut certifier calcul certifiant chaque quand bout compte dit sortie correctement peut aussi sous forme circuit qui fonctionne pour toute taille faut peu attention pour que tout ceci reste dans cadre taille polynomiale est genre argument qui permis cook fournir premier plus populaire des complets celui des expressions une expression une affecter les variables qui donne expression valeur vrai plus parlant que complet universel que nous avons page dans section nous donne proposition soit algorithmique sous forme binaire soit une fonction algorithmique est dans classe alors est dans est dans classe alors est dans fonction est dans classe alors elle est dans signification intuitive importante est que classe est exact analogue non uniforme classe soit effet algorithmique qui correspond une famille fonctions est dans classe signifie que est calculable par une famille uniforme circuits fortiori taille est polynomiale universel permanent est dans classe signifie que est calculable par une famille circuits dont taille est polynomiale que sont partir similaire partir une autre signification intuitive importante est que les classes sont les exacts analogues non uniformes des classes preuve est donne 
uniforme une famille arbitraire dans uniforme non famille fonctions obtient implication autrement dit conjecture non uniforme est plus forte que conjecture classique remarque vaut par universel permanent permanent par permanent une matrice aij sur anneau commutatif est les aij per par expression analogue celle obtenue les signes par les per pern aij parcourt toutes les permutations nous pern comme une famille variables sur anneau pas rapide permanent une matrice coefficients entiers sur aucun corps distincte permanent est laisse donc facilement lorsque les coefficients sont tous peut matrice comme donnant graphe une relation entre deux ensembles par exemple les sont des filles ceux sont des relation est relation ils veulent bien danser ensemble alors permanent matrice correspondante compte nombre distinctes permanent conjecture plir piste danse sans laisser personne sur bord ainsi famille par pern est une famille dans valiant page montre par ailleurs que famille pern est dans sur importe quel anneau commutatif effet famille pern est autre que forme famille des fonctions qui testent une matrice dans est une matrice permutation deux valiant sur permanent valiant universel permanent fois binaire calcul permanent pour les matrices coefficients dans est sur corps plus sur anneau dans lequel est inversible famille pern est universelle pour classe toute famille dans est une famille pern les preuves ces deux sont pour nous recommandons bur conjecture valiant petit tableau les analogies entre classes dans les colonnes interviennent des familles non uniformes expressions circuits dans colonne sim nous indiquons simulation cas est connue comme sur ligne deux points interrogation signifient croit possible rappelons que dans colonne binaire toutes les inclusions descendant sont strictes que les inclusions correspondantes dans cas colonne sont aussi strictes conjecture valiant petit analogies entre binaire binaire sim bpe vpe oui vqp vqpe bpe oui valiant conjecture pour tout corps cette conjecture est analogue non uniforme conjecture algorithmique plus sur corps cette conjecture purement termes expressions permanent est pas une est sur les corps finis que conjecture semble plus significative parce que situation est plus proche cas elle est pas par taille arbitrairement grande dans corps disposait une uniforme qui famille pern une famille dans alors calcul permanent une matrice dans serait dans classe donc aurait par plus page montre que implique pour tout corps fini sous riemann pour tout corps nulle par ailleurs avait calcul permanent une matrice dans serait dans classe donc foriori dans aurait mais pas pour autant permanent conjecture conjecture valiant est existe aucune sans restrictive qui famille pern une famille dans avantage conjecture valiant est elle est purement qui parle uniquement taille une certaine famille par des familles circuits comme des aspects les plus conjecture cela pas toujours million dollars mais cela toujours excitant tient question des familles circuits jeu contournerait cet obstacle conjecture analogue non uniforme plus forte forme purement serait plus notre une preuve serait pas important qui chemin pour une preuve qui implique cela pourrait enfin une preuve petit ennui dans cette suite informelles les deux points interrogation sur ligne petit tableau comme vqp vqpe conjecture valiant savoir pour tout corps vqp est par certains auteurs comme encore plus instructive pour algorithmique analogue sur corps cela permanent est pas une notons que que vqp sur les corps nulle voir 
bur milliardaire qui aimerait devenir prix million dollars pour celui celle qui six autres conjectures importantes sont prix analogue million dollars est ailleurs pas grand chose que gagne bon joueur football rien tout par rapport avion furtif ceci tendrait dire milliardaire peut devenir avec investissement modeste notez que vous que vous aurez droit admiration tou les mais vous aurez pas million dollars correspondant est certainement injuste mais est ainsi annexe codes maple nous donnons dans les pages qui suivent les codes maple des algorithmes qui calculent dont nous avons les performances les codes sont ici dans version maple mais les tests ont faits avec version maple les sont les suivantes version maple grandement son calcul standard basant sur algorithme berkowitz dans maple dernier objet est par alors que dans maple par enfin dans maple une termine par end proc tandis que dans maple elle termine par end les algorithmes que nous avons sont ceux berkowitz berkodense barmodif faddeev chistov chistodense leurs versions modulaires respectives nous donnons ici berkomod ainsi que les algorithmes correspondant interpolation lagrange celle hessenberg celle respectivement interpoly hessenberg kalto plus fonction charpoly faisant partie package linalg maple que nous avons linalpoly dans nos tableaux comparaison nous avons berkodense chistodense cas des matrices creuses voir les codes berksparse chisparse les mesures temps cpu pour chaque algorithme sont prises aide des fonctions time bytesalloc noyau maple annexe somme des une liste fractions rationnelles somme proc suite list ratpoly normal convert suite end proc berkowitz dans cas une matrice dense berkodense proc matrix name local coldim table somme seq somme seq somme seq somme seq min somme seq collect end proc codes maple berkowitz dans cas une matrice creuse berksparse proc matrix name local coldim table vector union somme seq somme seq somme seq somme seq min somme seq collect end proc nous avons les codes maple correspondant algorithme berkowitz cas les coefficients appartiennent type lisvar obtient une berkomod dans nos tableaux comparaison qui prend entier positif une liste lisvar une liste ideal lisvar matrice lisvar pour donner sortie berkomod ainsi que les versions modulaires des autres annexe algorithmes utilisent comme polmod qui prend nombre entier lisvar donne sortie simple image canonique dans lisvar ome modulo polmod proc polynom lisvar list ideal list posint local nops lisvar nops ideal error number polynomials must equal number variables nops lisvar rem ideal lisvar mod sort end proc les deux calculs base modulo somme une liste produit deux somme une liste modulo sommod proc list polynom lsv list name lsp list polynom posint polmod somme lsv lsp end proc evaluation produit modulo ideal promod proc polynom lsv list name lsp list polynom posint polmod lsv lsp end proc reste plus berkodense les somme une liste produit deux par les calculs modulaires par sommod promod codes maple berkowitz modulaire berkomod proc matrix name lsv list name lsp list polynom posint local coldim table seq promod lsv lsp sommod lsv lsp seq promod lsv lsp sommod lsv lsp seq promod lsv lsp sommod lsv lsp seq promod lsv lsp min sommod lsv lsp somme seq collect end proc voici maintenant sans plus commentaire les codes maple des algorithmes chistodense chisparse barmodif faddeev interpoly hessenberg kalto annexe chistov cas des matrices denses chistodense proc matrix name local coldim array array normal somme seq somme seq somme seq somme seq subs 
inversf collect end proc calcul inverse modulo ome inversf proc collect convert series polynom normal end proc cette dans les algorithmes chistov sera aussi utile dans algorithme kalto codes maple chistov cas des matrices creuses chisparse proc matrix name local coldim array array array non nuls ligne union fin construction normal seq intersect somme seq intersect somme somme seq somme seq subs inversf collect end proc annexe barmodif proc matrix name local piv dencoe den coldim copy evalm array identity piv coe normal piv den piv piv sort collect piv end proc faddeev proc matrix name local coldim array array identity copy map normal multiply trace map normal evalm somme seq sort end proc interpolation lagrange interpoly proc matrix name local coldim array identity det evalm seq interp end proc codes maple hessenberg hessenberg proc matrix name local jpiv ipiv iciv piv initialisations coldim copy forme hessenberg jpiv ipiv iciv ipiv piv normal iciv jpiv piv iciv iciv piv normal iciv jpiv piv iciv ipiv swaprow ipiv iciv echange lignes swapcol ipiv iciv echange colonnes normal jpiv addrow ipiv manipulation lignes addcol ipiv manipulation colonnes map normal calcul ome normal normal normal collect ome end proc annexe developpement ordre devlim proc ratpoly name integer convert series polynom collect normal end proc kalto proc matrix name local coldim initialisation stre stra vector evalm calcul des copy vector multiplication par somme seq multiplication par somme seq devlim polgenmin sort subs res collect normal end proc codes maple dans kalto polgenmin pour calcul ome minimal une suite ici anneau base est anneau des polgenmin proc vector name name integer local ilc ill somme seq quo ill traiter collect normal lcoeff ilc inversf devlim ilc quo devlim devlim devlim ill ill ilc sort sort collect lcoeff ilc inversf devlim ilc collect normal end proc annexe vecteur centre des divisions stre proc local vector binomial floor eval end proc matrice centre des divisions stra proc local array sparse floor binomial floor evalm end proc liste des algorithmes circuits programmes algorithme pivot gauss algorithme pivot gauss lup une matrice surjective algorithme algorithme dodgson pour une matrice hankel algorithme hessenberg algorithme verrier algorithme algorithme preparata sarwate version algorithme berkowitz principe algorithme berkowitz version algorithme chistov principe algorithme chistov version simple algorithme jorbarsol algorithme frobenius algorithme programme une matrice ordre circuit algorithme pivot gauss programme matrice programme une matrice des divisions strassen liste des algorithmes calcul des exemples circuits produit deux karatsuba transformation fourier multiplication par blocs produit strassen deux matrices ordre lup bunch hopcroft algorithme algorithme csanky algorithme preparata sarwate liste des figures construction des circuits binaires construction des circuits construction des circuits construction des circuits produit matriciel trou bini produit matriciel plein par blocs bini une fois bini deux fois produit trous extrait bini deux fois exemple arbitraire produit matriciel trous exemple une fois somme directe deux produits matriciels somme directe somme directe variante calcul calcul calcul des produits partiels arbre horner une expression une expression une expression arbre horner une expression bibliographie ahu aho hopcroft ulmann design analysis computer algorithms addison wesley reading bdg structural complexity second edition texts theoretical computer science springer 
bha bhaskara rao theory generalized inverses commutative ring taylor francis londres bini pan polynomial matrix computations vol fundamental algorithms bur completeness reduction algebraic complexity theory springer bcs clausen shokrollahi algebraic complexity theory springer cia ciarlet introduction analyse matricielle optimisation dunod coh cohen course computational algebraic number theory graduate texts maths vol springer ccs cohen cuypers sterk eds tapas computer algebra algorithms computation mathematics vol springer cosnard trystram algorithmes architectures intereditions paris dur durand solutions des tome plusieurs valeurs propres des matrices masson paris bibliographie faddeev faddeeva computational methods linear algebra freeman san francisco faddeev sominskii collected problems higher algebra problem gabriel roiter representations algebras springer von zur gathen gerhard modern computer algebra cambridge university press gan gantmacher des matrices tome dunod paris gas gastinel analyse hermann paris gob goblot commutative masson paris golub van loan matrix computations hopkins univ press baltimore london gkw grabmeier kaltofen weispfenning eds computer algebra handbook foundations applications systems springer jac jacobson lectures abstract algebra basic concepts van nostrand toronto springer kko ker complexity theory real functions knu knuth art computer programming vol seminumerical algorithms second edition addison wesley publishing kou koulikov des nombres mir moscou lancaster tismenetsky theory matrices second edition academic press min minc permanents encyclopedia mathematics applications vol reading mrr mines richman ruitenburg course constructive algebra universitext springer bibliographie pan pan multiply matrices faster lecture notes computer science springer verlag rob robert impact vector parallel architectures gaussian elimination algorithm manchester univ press halsted press john wiley sons brisbane toronto sav savage complexity computing robert krieger pub malabar florida sch schrijver theory integer linear programming john wiley ser serre cours puf paris ste stern fondements informatique paris tur turing girard machine turing seuil points sciences paris weg wegener complexity boolean functions wileyteubner series computer science stuttgart articles abdeljaoued algorithmes rapides pour calcul abdeljaoued berkowitz algorithm maple computing characteristic polynomial arbitrary commutative ring computer algebra mapletech birkhauser boston abdeljaoued malaschonok efficient algorithms computing characteristic polynomial domain journal pure applied algebra bareiss sylvester identity multistep gaussian elimination math bibliographie baur strassen complexity partial derivatives theoretical computer science berkowitz computing determinant small parallel time using small number processors information processing letters bini relation exact approximate bilinear algorithms applications calcolo bini capovani lotti romani complexity matrix multiplication inf proc letters borodin von zur gathen hopcroft parallel matrix gcd computations information control brent parallel evaluation general arithmetic expressions assoc comp bunch hopcroft triangular factorization inversion fast matrix multiplication math structure valiant complexity classes discr math theoret comp cantor kaltofen fast multiplication polynomials arbitrary rings acta informatica chandra maximal parallelism matrix multiplication report watson research center yorktown heights chemla entre remarques sur commentaire liu 
hui aux neuf chapitres sur les recherche histoire des sciences philosophie arabes les comme champ des algorithmes dans les neuf chapitres sur les leurs commentaires des bibliographie chistov fast parallel calculation rank matrices field arbitrary characteristic proc fct springer lecture notes computer science coppersmith winograd asymptotic complexity matrix multiplication siam coppersmith winograd matrix multiplication via arithmetic progressions proc ann acm symp theory computing coppersmith winograd matrix multiplication via arithmetic progressions proc symbolic computation cook complexity theorem proving procedures proc ann acm symp theory computing csanky fast parallel inversion algorithms siam dantzig maximization linear function variables subject linear inequalities activity analysis production allocation koopmans wiley gonzalez vega lombardi generalizing cramer rule solving uniformly linear systems equations siam journal matrix analysis applications gonzalez vega lombardi modules projectifs type fini applications inverses journal algebra dodgson condensation determinants new brief method computing arithmetic values proc royal soc dornstetter equivalence berlekamp euclid algorihms ieee trans inform theory bibliographie della dora dicrescenzo duval new method computing algebraic number fields eurocal vol springer lecture notes computer science eberly fast parallel matrix polynomial arithmetic technical report phd thesis university toronto canada fich von zur gathen rackoff complete families polynomials manuscript frame simple recurrent formula inverting matrix abstract bull amer math galil pan parallel evaluation determinant inverse matrix information processing letters gastinel sur calcul produits matrices num von zur gathen parallel arithmetic computations survey proc internat symp mathematical foundations computer science lecture notes computer science springer berlin von zur gathen feasible arithmetic computations valiant hypothesis journal symbolic computation von zur gathen parallel linear algebra synthesis parallel algorithms reif editor morgan kaufmann publishers san mateo californie giesbrecht fast algorithms rational forms integer matrices international symposium symbolic algebraic computation oxford acm press giesbrecht nearly optimal algorithms canonical matrix forms siam journal computing giusti heintz des points dimension une peut faire temps polynomial computational algebraic geometry commutative algebra eds eisenbud robbiano cambridge university press bibliographie giusti heintz morais morgenstern pardo programs geometric elimination theory pure appl algebra gonzalez vega lombardi recio roy suite sturm informatique applications rouillier roy trujillo symbolic recipes real solutions dans ccs tensor rank automata languages programming proc international colloquium springer lecture notes computer science heintz sieveking lower bounds polynomials algebraic coefficients theoretical computer science hoover feasible real functions arithmetic circuits siam hopcroft musinski duality applied complexity matrix multiplication bilinear forms siam hyafil parallel evaluation multivariate polynomials siam kakeya fundamental systems symmetric functions jap kaltofen computing determinants matrices without divisions acm kaltofen pan processor efficient parallel solution linear systems abstract field proc ann parallel algo architectures acm press july kaltofen pan processor efficient parallel solution linear systems general case proc ieee symp foundations computer science pittsburg usa 
bibliographie kaltofen singer size efficient parallel algebraic circuits partial derivatives proc intern conf computer algebra physical research world scientific singapour kaltofen villard complexity computing determinants proc fifth asian symposium computer mathematics ascm kaltofen villard computing sign value determinant integer matrix complexity survey dans computational applied math special issue international calcul symbolique rabat maroc may pages karmakar new polynomial time algorithm linear programming combinatorica karp lipton turing machines take advice logic arithmetic int zurich monogr enseign math karp ramachandran parallel algorithms machines handbook theoretical computer science edited van leeuwen elsevier science publishers fast algorithms characteristic polynomial theoretical computer science friedman computational complexity real functions theoretical computer science khachiyan polynomial algorithms linear programming soviet math doklady khachiyan polynomial algorithm linear programming computational mathematics mathematical physics kruskal rudolph snir complexity theory efficient parallel algorithms theoretical computer science bibliographie labhalla lombardi moutai espaces rationnellement cas espace des fonctions continues sur intervalle compact theoretical computer science ladner fischer parallel prefix computation journal acm verrier sur les variations des elliptiques des sept principales mercure terre mars jupiter saturne uranus math pures lickteig roy sequences fast cauchy index computation journal symbolic computation lombardi roy safey din new structure theorems subresultants journal symbolic computation loos generalized polynomial remainder sequences computer algebra symbolic algebraic computation berlin matera turull torres space complexity elimination upper bounds foundations computational mathematics rio janeiro springer berlin miller ramachandran kaltofen efficient parallel evaluation code arithmetic circuits siam comput moenck fast computation gcds proc stoc morgenstern compute fast function derivatives variation theorem sigact news mulmuley fast parallel algorithm compute rank matrix arbitrary field combinatorica ostrowski two problems abstract algebra connected horner rule studies math mec presented richard von mises academic press bibliographie pan computation schemes product matrices inverse matrix uspehi mat nauk prasad bapat generalized inverse linear algebra appl preparata sarwate improved parallel processor bound fast matrix inversion inf proc letters revol circuits institut national polytechnique grenoble samuelson method determining explicitely characteristic equation ann math sasaki murao efficient gaussian elimination method symbolic determinants linear systems acm trans math software fast parallel computation characteristic polynomials verrier power sum method adapted fields finite characteristic proc icalp lecture notes computer science springer partial total matrix multiplication siam skyum valiant complexity theory based boolean algebra assoc comp souriau une pour spectrale inversion des matrices acad sciences spira time hardware complexity tradeoffs boolean functions proceedings fouth hawaii international symposium system sciences strassen gaussian elimination optimal numerische mathematik strassen vermeidung von divisionen crelle reine angew strassen polynomials rational coefficients hard compute siam bibliographie strassen work valiant proc international congress mathematicians berkeley usa strassen relative bilinear complexity matrix 
multiplication crelle reine angew strassen algebraic complexity theory handbook theoretical computer science van leeuwen vol chap elsevier science publishers valiant completeness classes algebra proc acm stoc valiant complexity computing permanent theoretical computer science valiant reducibility algebraic projections logic algorithmic symposium honour ernst specker enseign math valiant skyum berkowitz rackoff fast parallel computation polynomials using processors siam wiedemann solving sparse linear equations finite fields trans inf theory winograd multiplication matrices linear algebra index des termes affectation sans scalaires module anneau des ordre anneau des non commutatifs application quadratique approximation ordre une application centre des divisions chistov algorithme circuit avec divisions sans division code entiers coefficient gram une matrice comatrice compagnon matrice une famille circuits binaire binaire une famille circuits multiplicative conjecture addititve cramer berkowitz algorithme algorithme bien algorithme application calcul binaire calcul approximatif quadratique description une fonction formule matrice transformation des divisions strassen les divisions strassen dans circuit entier essentielle multiplication exposant acceptable pour multiplication des matrices multiplication des matrices expression famille uniforme circuits circuits fonction forme une famille fonctions une fonction forme normale conjonctive disjonctive formule index des termes samuelson frobenius algorithme matrice gauss pivot gram coefficient coefficient hadamard hankel matrice hessenberg algorithme horner expression expression inverse rang inverse rang kakeya krylov verrier algorithme algorithme index des termes longueur programme multiplicative stricte lup matrice adjointe adjointe compagnon unitaire frobenius hankel toeplitz fortement rang une trace une triangulaire unimodulaire unitriangulaire mineur connexe une matrice principal module libre sur anneau inverse newton relation somme non complet famille applications famille expressions famille expressions famille circuits famille circuits famille fonctions famille famille fonctions famille expressions famille fonctions famille applications fonctions programme polylogarithmique cyclotomique une suite minimal une suite minimal minimal une suite gram preparata sarwate algorithme produit matriciel trous profondeur programme une expression multiplicative stricte programme largeur longueur profondeur taille variable affectation projection une expression quadratique index des termes application calcul racine primitive principale rang marginal rang tensoriel une application marginal relation newton creuse dense par expressions samuelson asymptotique variante csanky simulation somme newton somme directe deux applications somme disjointe deux applications algorithme principale principale dominante suite sylvester fondamental index des termes fractions rationnelles taille programme une expression une expression tenseur chinois toeplitz matrice transformation unimodulaire triangulaire matrice unimodulaire transformation unitriangulaire matrice valeur propre une une wiedemann algorithme
0
coded caching schemes reduced subpacketization linear block codes feb tang aditya ramamoorthy department electrical computer engineering iowa state university ames emails litang adityar caching technique generalizes conventional caching promises significant reductions traffic caching networks however basic coded caching scheme requires file hosted server partitioned large number subpacketization level subfiles practical perspective problematic means prior schemes applicable size files extremely large work propose coded caching schemes based combinatorial structures called resolvable designs structures obtained natural manner linear block codes whose generator matrices possess certain rank properties obtain several schemes subpacketization levels substantially lower basic scheme cost increased rate depending system parameters approach allows operate various points subpacketization level rate tradeoff index caching resolvable designs cyclic codes subpacketization level ntroduction caching popular technique facilitating large scale content delivery internet traditionally caching operates storing popular content closer end users typically cache serves end user file request partially sometimes entirely remainder content coming main server prior work area demonstrates allowing coding cache coded transmission server referred coded caching end users allow significant reductions number bits transmitted server end users exciting development given central role caching supporting significant fraction internet traffic particular reference considers scenario single server contains files server connects users shared link user cache allows store fraction files server coded caching consists two distinct phases placement phase delivery phase placement phase caches users populated phase depend user demands assumed arbitrary delivery work supported part national science foundation grants paper presented part ieee workshop network coding applications netcod ieee international symposium information theory isit phase server sends set coded signals broadcast user user demand satisfied original work considered case centralized coded caching server decides content needs placed caches different users subsequent work considered decentralized case users populate caches randomly choosing parts file respecting cache size constraint recently several papers examined various facets coded caching include tightening known bounds coded caching rate considering issues respect decentralized caching explicitly considering popularities files network topology issues synchronization issues work examine another important aspect coded caching problem closely tied adoption practice important note huge gains caching require file partitioned subfiles equal size referred subpacketization level observed fixed cache size grows exponentially problematic practical implementations instance suppose rate case evident bare minimum size file least terabits leveraging gains even worse practice atomic unit storage present day hard drives sector size bytes trend disk drive industry move bytes result minimum size file needs much higher terabits therefore scheme practical even moderate values furthermore even smaller values schemes low subpacketization levels desirable practical scheme require subfiles header information allows decoding end users large number subfiles header overhead may parameters proposed approach work allows obtain following operating points iii first point evident subpacketization level drops five orders magnitude small increase 
rate point iii show proposed scheme allows operate various points tradeoff subpacketization level rate issue subpacketization first considered work decentralized coded caching setting centralized case considered work proposed low subpacketization scheme based placement delivery arrays reference viewed problem hypergraph perspective presented several classes coded caching schemes work recently shown exist coded caching schemes subpacketization level grows linearly number users however result applies number users large elaborate related work section work propose low subpacketization level schemes coded caching proposed schemes leverage properties combinatorial structures known resolvable designs natural relationship linear block codes schemes applicable wide variety parameter ranges allow system designer tune subpacketization level gain system respect uncoded system note designs also used obtain results distributed data storage network coding based function computation recent work paper organized follows section discusses background related work summarizes main contributions work section iii outlines proposed scheme includes constructions essential proofs central object study work matrices satisfy property call consecutive column property ccp section overviews several constructions matrices satisfy property several longer involved proofs statements sections iii appear appendix section perform comparison work existing constructions literature conclude paper discussion opportunities future work section columns correspond points blocks respectively let otherwise observed transpose incidence matrix also specifies design refer transposed design work utilize resolvable designs special class designs definition parallel class design subset disjoint blocks whose union partition several parallel classes called resolution said resolvable design least one resolution resolvable designs follows point also appears number blocks example consider block design specified follows incidence matrix given observed design resolvable following parallel classes sequel let denote set emphasize original scheme viewed instance trivial design example consider setting integer let scheme users associated subfiles user caches subfile main message work carefully constructed resolvable designs used obtain coded caching schemes low subpacketization levels retaining much rate gains coded caching basic idea associate users blocks subfiles points design roles users subfiles also interchanged simply working transposed design background elated ork ummary ontributions consider scenario server files consist subfiles users equipped cache size subfiles coded caching scheme specified means placement scheme appropriate delivery scheme possible demand pattern work use combinatorial designs specify placement scheme coded caching system definition design pair example consider resolvable design example blocks correspond six users file partitioned subfiles correspond four points cache user denoted specified zij example note caching scheme symmetric respect files server furthermore user caches half file suppose delivery phase user requests file wdb set elements called points collection nonempty subsets called blocks block contains number points design correspondence incidence matrix defined follows definition incidence matrix design binary matrix dimension rows rate benefits coded caching would lost scale exponentially following work authors introduced technique designing low subpacketization schemes centralized setting called placement 
delivery arrays considered setting demonstrated scheme subpacketization level exponentially smaller original scheme rate marginally higher scheme viewed special case work discuss aspects detail section design coded caching schemes achieved design hypergraphs appropriate properties particular specific problem parameters able establish existence schemes subpacketization scaled exp reference presented results setting considering strong edge coloring bipartite graphs recently showed existence coded caching schemes subpacketization grows linearly number users coded caching rate grows thus rate constant grow linearly either interesting results demonstrate existence regimes subpacketization scales manageable manner nevertheless noted results come several caveats example result valid regime large unlikely use practical values result significant restrictions users paper needs form demands satisfied follows pick three blocks one parallel classes generate signals transmitted delivery phase follows three terms correspond blocks different parallel classes equation structure also exploited user caches one subfiles participating equation specifically user contains thus decode subfile needs similar argument applies users verified three equations also property thus end delivery phase user obtains missing subfiles scheme corresponds subpacketization level rate contrast scheme would require subpacketization level rate thus evident gain significantly terms subpacketization sacrificing rate gains shown example obtain scheme associating users blocks subfiles points work demonstrate basic idea significantly generalized several schemes low subpacketization levels continue leverage much rate benefits coded caching obtained summary contributions work subpacketization levels obtain typically exponentially smaller original scheme however still continue scale exponentially albeit much smaller exponents however construction advantage applicable large range problem parameters specific contributions include following uncover simple natural relationship linear block code coded caching scheme first show linear block code cases mod prime prime power generates resolvable design design turn specifies coded caching scheme users cache fraction complementary cache fraction point integer also obtained intermediate points obtained memory sharing points consider class linear block codes whose generator matrices satisfy specific rank property particular require collections consecutive columns certain rank properties codes able identify efficient delivery phase determine precise coded caching rate demonstrate subpacketization level whereas coded caching gain scales respect uncoded caching scheme thus different choices discussion related work coded caching subject much investigation recent work discussed briefly earlier overview existing literature topic low subpacketization schemes coded caching original paper given problem parameters number users cache fraction authors showed rate equals integer multiple points obtained via memory sharing thus regime large coded caching rate approximately independent crucially requires subpacketization level observed fixed grows exponentially one main drawbacks original scheme reasons outlined section deploying solution practice may difficult subpacketization issue first discussed work context decentralized caching specifically showed decentralized setting subpacketization level exp rate would scale linearly thus much let consider equation gab allow system designer significant flexibility choose 
appropriate operating point discuss several constructions generator matrices satisfy required rank property characterize ranges alphabet sizes matrices constructed one given subpacketization budget specific setting able find set schemes fit budget leveraging rate gains coded caching fixed arbitrary values equation unique solution implies forms parallel class remark generator matrix prime power also considered matrix extension field integer thus one obtain resolvable design case well corresponding parameters calculated easy manner iii roposed low subpacketization level scheme constructions low subpacketization schemes stem resolvable designs definition overall approach first show linear block code used obtain resolvable block design placement scheme obtained resolvable design certain mild conditions generator matrix show delivery phase scheme designed allows significant rate gain uncoded scheme subpacketization level significantly lower furthermore scheme transformed another scheme operates point thus intermediate values obtained via memory sharing also discuss situations operate modular arithmetic mod necessarily prime prime power allows obtain larger range parameters remark also consider linear block codes mod necessarily prime prime power case conditions resolvable design obtained forming matrix little involved discuss lemma appendix example consider linear generator matrix vector represents codeword code let point set collection subsets observed resolution definition following parallel classes using construction obtain following result lemma construction procedure results design furthermore design resolvable parallel classes given special class linear block codes introduce special class linear block codes whose generator matrices satisfy specific rank properties turns resolvable designs obtained codes especially suited usage coded caching consider generator matrix linear block code column denoted let least positive integer divides denoted let denote mod construction need consider various collections consecutive columns wraparounds proof let gab gab note constructed follows using generate resolvable block design point set instance block obtained identifying column indexes zeros first row following obtain consider linear block code avoid trivialities assume generator matrix column collect codewords construct matrix size follows collecting nine codewords resolvable design construction ctqk block code gab boundaries allowed purpose let integer let gsa submatrix specified columns column gsa next define column property central rest discussion algorithm placement scheme input resolvable design constructed linear block code let least positive integer divide file subfiles thus user caches output cache content user denoted definition column property consider submatrices specified gsa say satisfies column property submatrices gsa full rank henceforth abbreviate column property parallel classes example example hence thus corresponding generator matrix satisfies ccp two columns submatrices gsi linearly independent note one also define different levels consecutive column property let least positive integer definition column property consider submatrices specified say satisfies column property full rank words columns linearly independent recovery sets fig recovery set bipartite graph general see algorithm users file divided subfiles subfile cached user therefore user caches total subfiles file consists subfiles remains show design delivery phase scheme satisfies possible demand pattern suppose 
delivery phase user requests file wdb server responds transmitting several equations satisfy user equation allows users different parallel classes simultaneously obtain missing subfile delivery scheme set transmitted equations classified various recovery sets correspond appropriate collections parallel classes example fig turns recovery sets defined correspond precisely sets earlier illustrate means example pointed sequel codes satisfy ccp result caching systems multiplicative rate gain uncoded system likewise codes satisfy gain uncoded system remainder paper use term ccp refer value clear context usage coded caching scenario resolvable design generated linear block code satisfies ccp used coded caching scheme follows associate users blocks subfile associated point additional index placement scheme follows natural incidence blocks points formal description given algorithm illustrated example example consider resolvable design example recall blocks correspond twelve users file partitioned subfiles denoted cache user uabc denoted zabc specified zabc corresponds coded caching system user caches file example consider placement scheme specified example let user request file wdb recovery sets specified means recovery set bipartite graph shown fig corresponds outgoing edges parallel class labeled arbitrarily numbers delivery scheme user recovers missing subfiles specific superscript recovery set corresponding parallel class participates instance user parallel class recovers missing subfiles superscript superscript superscript superscripts labels outgoing edges bipartite graph verified user lies recovers missing subfiles superscript equations algorithm signal generation algorithm psa input psa label psa signal set sig user recover missing subfiles superscript pick blocks pick blocks distinct parallel classes psa intersection empty equations benefits three users generated simply choosing block last block intersection blocks empty fact equations useful problem hand consequence ccp process generating equations applied possible recovery sets shown allows users satisfied end procedure let determine missing subfile index user recover add signal sig user ubs demands file equation allows recover corresponding missing subfile index superscript determined recovery set bipartite graph follows first show recovery set psa possible generate equations benefit users simultaneously claim consider resolvable design constructed described section linear block code satisfies ccp let psa subset parallel classes corresponding emphasize consider blocks bik lik lij picked distinct parallel classes psa bij lij end output signal set sig see consider system equations variables proving claim discuss application delivery phase note claim asserts blocks chosen distinct parallel classes intersect precisely one point suppose one picks users distinct parallel classes intersection empty blocks equivalently users participate equation benefits users particular user recover missing subfile indexed intersection blocks emphasize claim core delivery phase course need justify enough equations found allow users recover missing subfiles follows natural counting argument made formally subsequent discussion superscripts needed counting argument gbik lik ccp vectors gik linearly independent therefore system equations variables unique solution result follows provide intuitive argument delivery phase recall form recovery set bipartite graph see fig example parallel classes recovery sets disjoint vertex subsets edges incident parallel class 
labeled arbitrarily parallel class psa denote label label given recovery set psa delivery phase proceeds choosing blocks distinct parallel classes psa intersection empty provides equation benefits users turns equation allows user parallel class psa recover missing subfile superscript label psa formal argument made algorithm ease notation algorithm denote demand user ubi proof following construction section note block specified consider bik lik lij picked distinct parallel classes psa assume let denote submatrix obtained retaining rows show vector lik column appears claim consider user belonging parallel class psa signals generated algorithm recover missing subfiles needed superscript proof let psa arguments argue user demands file recover missing subfiles superscript note thus user needs obtain missing subfiles superscript consider iteration loop block picked step equation algorithm allows recover claim next count number equations participates pick users distinct parallel classes psa done ways claim ensures blocks chosen intersect single point next pick block remaining parallel class psa intersection blocks empty done ways thus total equations user participates remains argue equation provides distinct subfile towards end let index set suppose exist sets blocks bik lik bik bik lik bik bij lij bij contradiction since bij turn implies bij lij impossible since two blocks parallel class empty intersection algorithm symmetric respect blocks parallel classes belonging psa required result obtaining scheme construction works system turns converted scheme thus convex combination two points obtained towards end note class coded caching schemes considered specified equationsubfile matrix inspired hypergraph formulation placement delivery array pda based schemes coded caching equation assumed type form wdtm ajm property user cache subfile caches subfiles ajs coded caching system corresponds equationsubfile matrix follows associate row equation column subfile denote row eqi column value equation user recovers subfile wdt otherwise suppose equations allow user satisfy demands corresponds valid coded caching scheme hard see placement scheme obtained examining namely user caches subfile corresponding column integer appear column example consider coded caching system denote four users suppose matrix scheme specified overall delivery scheme repeatedly applies algorithm recovery sets lemma proposed delivery scheme terminates allows user demand satisfied furthermore transmission rate server subpacketization level proof see appendix upon examining evident instance user caches subfiles number appear corresponding columns similarly cache placement users obtained interpreting placement scheme terms assignment verified design obtained corresponds transpose scheme considered example also scheme main requirement lemma hold recovery set bipartite graph biregular multiple edges pair nodes disallowed degree parallel class hard see follows definition recovery sets see proof appendix details analogous manner one starts generator matrix code satisfies obtain following result stated details quite similar discussion found appendix section lemma consider matrix whose entries belong set corresponds valid coded caching system following three conditions satisfied integer appearing column integer appearing row corollary consider coded caching scheme obtained forming resolvable design obtained code satisfies let least positive integer delivery scheme constructed transmission rate subpacketization level proof placement scheme 
obtained discussed earlier user caches subfiles integer appear column therefore matrix corresponds placement scheme next discuss delivery scheme note eqi corresponds equation follows memory sharing convex combination points achievable similar manner linear block code satisfies caching system converted system using technique arguments presented apply essentially change wdtm ajm equation allow users recover subfiles simultaneously cache caches ajs evident cache owing placement scheme next guarantee condition need show integer appear column ajs towards end condition next consider entries lie column ajs row eqi assume exists entry contradiction condition finally condition guarantees missing subfile recovered ome classes linear codes satisfy ccp point established linear block codes satisfy ccp attractive candidates usage coded caching section demonstrate large class generator matrices satisfy ccp section work matrices finite field order last subsection discuss constructions matrices mod prime prime power summarize constructions presented section table user caches fraction number columns entry similarly transmission rate given crucial point transpose also corresponds coded caching scheme follows directly fact also satisfies conditions lemma particular corresponds coded caching system users subfiles placement phase cache size delivery phase transmitting equations corresponding rows missing subfiles recovered transmission rate applying discussion context consider matrix corresponding coded caching system corresponds system transmission rate following theorem main result paper mds codes codes minimum distance clearly class codes satisfy ccp fact codes columns generator matrix shown full rank note however mds codes typically need large field size assuming mds conjecture true construction value number users thus large obtain systems small values equivalently large values theorem may restrictive practice cyclic codes cyclic code linear block code circular shift codeword also codeword cyclic code specified monic polynomial coefficients needs divide polynomial generator matrix cyclic code obtained theorem consider linear block code satisfies ccp corresponds coded caching scheme users files server user cache size let least positive integer following claim shows verifying ccp cyclic code suffices pick set consecutive columns claim consider cyclic code generator matrix let denote set consecutive columns submatrix full rank satisfies proof let generator polynomial cyclic code note degree let code type code construction notes mds codes satisfy need cyclic codes existence depends certain properties generator polynomials cyclic codes satisfy need additional conditions kronecker product matrix satisfying identity matrix satisfy kronecker product vandermonde matrices structured base matrices satisfy certain parameters ccp matrix extension extends ccp matrix ccp matrix integer codes field codes ring mod single spc code satisfy cyclic codes ring require prime satisfy kronecker product matrix satisfying identity matrix satisfy property ccp matrix extension extends ccp matrix ccp matrix integer table summary different constructions ccp matrices ection assume satisfies let need show full rank full rank full rank codeword definition cyclic code circular shift codeword results another codeword belongs code therefore codeword thus full rank submatrices full rank expression submatrices full rank example consider polynomial since divides generator polynomial cyclic code generator matrix code given claim implies low 
complexity search algorithm determine cyclic code satisfies ccp instead checking gsa definition need check arbitrary simplify search choose choice claim shows need check rank list matrices determine submatrix full rank proof appears appendix verified submatrix consists two leftmost columns three rightmost columns submatrices full rank thus claim satisfied claim cyclic code generator matrix satisfies ccp following conditions hold remark cyclic codes form important class codes satisfy definition consecutive columns generator matrix cyclic code linearly independent proof proof leverages idea expressed succinctly using kronecker products arguments found appendix constructions leveraging properties smaller base matrices consider case construct linear code satisfy ccp noted constraint field size looser corresponding constraint claim well recognized cyclic codes necessarily exist choice parameters divisibility requirement generator polynomial discuss general construction generator matrices satisfy ccp shall see construction provides less satisfactory solution large range system parameters first simple observation kronecker product denoted generator matrix satisfies identity matrix immediately yields generator matrix satisfies claim consider consider linear block code whose generator matrix specified follows distinct satisfies ccp claim consider linear block code whose generator matrix specified matrix satisfies satisfies proof see appendix given code satisfies ccp use obtain higher values simple manner discussed claim proof recovery set specified saz recovery set specified sak since satisfies asaz full rank note gsak asaz det gsak det asaz det asaz therefore satisfies claim consider linear block code generator matrix satisfies ccp let first columns denoted submatrix matrix dimension remark let generator matrix cyclic code satisfies claim also satisfies ccp next construction addresses follows use following notation proof see appendix claim provide parameter choices possible code constructions example given may exist code however exists mds code claim obtain linear block code satisfies ccp similarly combining claim claim claim claim obtain linear block codes satisfy ccp result similar claim obtained specifically consider linear block code generator matrix satisfies let first columns matrix row cyclic shift one place right row first row matrix zero entries consider parameters let greatest common divisor gcd easy verify smallest integer let claim constructs linear code satisfies ccp since required field size claim lower mds code considered section dimension also satisfies claim consider linear block code whose generator matrix specified constructions prime prime power discuss constructions prime prime power attempt construct matrices ring mod case issue somewhat complicated fact square matrix mod linear independent rows determinant unit ring general fact makes harder obtain constructions vandermonde matrix satisfies claim exploit vandermonde structure matrices specifically difference units ring guaranteed unit however still provide constructions observed claim claim hold linear block codes mod use without proof subsection let denote code mod let codeword obtained follows component therefore qdkd codewords also evident cyclic discussed section form matrix codewords turns using technique discussed section obtain resolvable design furthermore gain system delivery phase shown kmin min discuss points detail appendix section claim let generator matrix single parity check spc code entries mod satisfies used 
base matrix claim proof hard see submatrix determinant unit mod thus result holds case iscussion omparison xisting chemes discussion claim following matrix entries mod satisfies number users cache fraction shown theorem gain therefore gain subpacketization level increase larger thus approach given subpacketization budget highest coded gain obtained denoted gmax kmax kmax largest integer kmax exists kmax linear block code satisfies ccp determining kmax characterize collection values exists linear code satisfies ccp mod use proposed constructions mds code claim claim claim claim claim claim purpose call collection generate algorithm note entirely possible linear block codes fit appropriate parameters outside scope constructions thus list may exhaustive addition note check working provide operating points proof proved following arguments proof claim treating elements mod setting need consider three different submatrices need check property correspond simpler instances submatrices considered types iii proof claim particular corresponding determinants always units mod remark note general construction claim potentially fail case matrices mod one cases consideration specifically type iii case determinant depends difference values difference units mod guaranteed unit thus guarantee determinant unit remark use claim obtain higher values based two classes linear block codes mod example consider caching system users cache fraction suppose subpacketization budget checking construct see table result kmax maximal coded gain achieve gmax contrast scheme achieve coded gain requires subpacketization level achieve almost rate performing using scheme example particular divide file size two smaller files size constructions cyclic codes work constructing cyclic codes mod specifically provides construction prime begin outlining construction chinese remainder theorem element mod unique representation terms residues modulo let mod denote map suppose cyclic codes exist individual code denoted construction notes spc code claim claim claim claim spc code claim extend spc code code claim generator polynomial spc code claim extend spc code code spc code claim extend spc code code spc code claim extend spc code code generator polynomial table ist values xample values obtained following lgorithm scheme applied size separately corresponding corresponding thus overall cache fraction overall coded gain scheme however subpacketization level fsm much greater subpacketization budget fig present another comparison system parameters different values scheme works integer fig plots markers corresponding values scheme achieves ease presentation rate left logarithm subpacketization level right shown plot present results corresponding two construction techniques spc code smaller spc code coupled claim seen subpacketization levels several orders magnitude smaller small increase rate comparison general parameters discussed next discussion shall use superscript refer rates subpacketization levels proposed scheme fig comparison rate subpacketization level system users left shows rate right shows logarithm subpacketization level green blue curves correspond two proposed constructions note schemes allow multiple orders magnitude reduction subpacketization level expense small increase coded caching rate determine suppose given given rate achieved memory sharing scheme corner points satisfy following equations comparison within scheme tkn least integer see consider following argument suppose statement true exists scheme operates via memory 
sharing points kkm note convexity conclude convex hull corner points tikn integer subpacketization level kmi addition note function convex parameter verified simple second derivative calculation first argue fsm lower bounded obtained follows given first unfortunately analytically becomes quite messy yield much intuition instead illustrate reduction subpacketization level numerical comparisons algorithm construction algorithm input prime power gcd gcd exists cyclic code satisfies condition claim corresponding codes constructed using claim else corresponding codes constructed spc code claim claim claim else corresponding codes constructed mds code claim else corresponding codes constructed claim claim end corresponding codes constructed claim claim end end end end end end prime power corresponding codes constructed claim claim claim claim end end end output example consider linear block code generator matrix specified checked satisfies thus corresponds coded caching system users scheme achieves point hand numerically solving obtain therefore much higher similar calculation shows thus also least large therefore still much higher next set comparisons proposed schemes literature note several restrictive parameters allow comparison comparison denote fsm rate subpacketization level scheme respectively rate comparison note comparison subpacketization level following results claim following results hold lim results obtained lim expressions represents binary entropy function proof results simple consequences approximating derivations found appendix next compare lower bound fsm subpacketization level proposed scheme principle solve system equations obtain appropriate values similar hard see exponentially lower thus rate higher subpacketization level exponentially lower thus gain scaling exponent respect scheme depends choice fsm positive integers positive integers scheme special case scheme second scheme let shows means better emphasize results require somewhat restrictive parameter settings finally consider work work leveraged results arrive coded caching schemes subpacketization linear specifically show constant exists scheme rate chosen arbitrarily small choosing large enough theoretical perspective positive result indicates regimes linear subpacketization scaling possible however results valid value large specifically result asymptotic parameter parameter ranges result clearly better compared work scaling exponent gain fig plot shows gain scaling exponent obtained using techniques different value curve corresponds choice value fig plot value different values plot assumes codes satisfying ccp found rates corresponds gain scheme case subpacketization level exponentially smaller respect presented result recovered special case work theorem linear block code chosen single parity check code mod specific case need prime power thus results subsume results recent preprint referencem proposed caching system corresponding rate min onclusions uture ork work demonstrated link specific classes linear block codes subpacketization problem coded caching crucial approach consecutive column property enforces certain consecutive column sets corresponding generator matrices fullrank present several constructions matrices cover large range problem parameters leveraging approach allows construct families coded caching schemes subpacketization level exponentially smaller compared approach several opportunities future work even though subpacketization level significantly lower still scales exponentially number users 
course rate growth number users much smaller recent results coded caching schemes demonstrate existence schemes subpacketization scales number users would interesting investigate whether ideas leveraged obtain schemes work practical systems tens hundreds users positive integers min precise comparison somewhat hard compare schemes certain parameter choices also considered let corresponds coded caching system comparison scheme keep transmission rates schemes roughly let assume corresponding linear block code exists better hand let obtain coded caching system keeping eferences niesen fundamental limits caching ieee trans info vol may ghasemi ramamoorthy improved lower bounds coded caching ieee trans info vol sengupta tandon clancy improved approximation tradeoff caching via new outer bounds ieee intl symposium info niesen decentralized coded caching attains tradeoff trans vol niesen coded caching nonuniform demands ieee trans info vol feb hachem karamchandani diggavi coded caching ieee intl symposium info tang ramamoorthy coded caching networks resolvability property ieee intl symposium info rates let regime subpacketization level typically lower work proposed caching schemes paramem ters exists relatively prime unit ring mod note wong tulino llorca caire effros langberg fundamental limits caching combination networks ieee international workshop signal processing advances wireless communications spawc june ghasemi ramamoorthy asynchronous coded caching ieee intl symposium info niesen coded caching content ieee intl conf fitzpatrick sector disk drives transitioning future advanced format technologies toshiba white paper online available http shanmugam tulino llorca dimakis finite length analysis coded multicasting annual allerton conference communication control computing sept shanmugam tulino llorca analysis coded multicasting ieee trans info vol yan cheng tang chen placement delivery array design centralized coded caching scheme ieee trans info vol shangguan zhang centralized coded caching schemes hypergraph theoretical approach preprint online available https shanmugam tulino dimakis coded caching linear subpacketization possible using graphs ieee intl symposium info june olmez ramamoorthy fractional repetition codes flexible repair combinatorial designs ieee trans info vol tripathy ramamoorthy capacity different message alphabets ieee intl symposium info incidence structures construction capacity analysis ieee trans info appear stinson combinatorial designs construction analysis springer yan tang chen cheng placement delivery array design strong edge coloring bipartite graphs ieee communications letters appear roth introduction coding theory cambridge university press lin costello error control coding prentice hall dummit foote abstract algebra wiley blake codes certain rings information control vol graham knuth patashnik concrete mathematics foundation computer science professional alon moitra sudakov nearly complete graphs decomposable large induced matchings applications proc annual acm symposium theory computing stoc horn johnson topics matrix analysis cambridge university press gab consider ring mod rewrite gab arbitrary equation unique solution since unit mod implies distinct solutions mod using chinese remainder theorem solutions mod result follows remark lemma easily verified linear block code mod construct resolvable block design one following conditions column generator matrix satisfied least one entry unit mod entries zero divisors greatest common divisor spc code mod entries 
generator matrix unit therefore construction always results resolvable design proof lemma first show proposed delivery scheme allows user demand satisfied note claim shows user parallel class belongs recovery set recovers missing subfiles specified superscript thus need show signals generated according claim recovery set done equivalent showing bipartite recovery set graph parallel class degree multiple edges nodes disallowed towards end consider parallel class claim exist exactly solutions integer values equation existence solution equation follows division algorithm note rhs furthermore note solutions would imply contradiction shows parallel class participates least different recovery sets following facts follow easily construction recovery sets degree recovery set bipartite graph multiple edges recovery set parallel class ppendix resolvable design mod lemma linear block code mod generator matrix gab construct resolvable block design procedure section gcd proof assume prime prime power gcd evident gcd either prime prime power follows disallowed therefore total number edges bipartite graph parallel class participates least recovery sets argument participates exactly recovery sets finally calculate rate delivery phase total equations symbol server transmits transmitted size subfile thus rate matrices full rank respectively upper triangular lower triangular entries diagonal cyclic code therefore full rank full rank partitioned similar form result claim follows proof claim proof claim matrix shown need argue submatrices gsa full rank follows argue submatrices full rank proof gsa similar note written compactly follows using kronecker products expression subsequent discussion set claim cyclic code generator matrix satisfies ccp submatrices full rank follows argue true note generator matrix cyclic code consecutive columns linearly independent therefore full rank without needing conditions claim rewriting block form get first columns last column respectively likewise first components last component matrices obtained deleting column respectively using schur determinant identity det det det next check determinant submatrices obtained deleting column let block form resultant matrix expressed det det det det holds properties kronecker product holds since next note det denoted matrix next define matrix matrix type easy verify submatrix full rank since form generator matrix spc code type form case suppose delete first columns set columns depicted underbrace say column let resultant matrix expressed follows another application schur determinant identity yields det det det form note since det det columns vandermonde form columns correspond distinct elements therefore note however discussion focused argument needs apply gsa need verify full rank need check full rank full rank checking full rank simplified follows move corresponding column first entry first column following consider obtained deleting column proof claim note matrix generator matrix linear block code since coprime least positive integer show satisfies ccp need argue submatrices gsa full rank easy check verify three types matrix gsa follows iii matrix follows matrix follows case det since matrix full rank det gsa gsa full rank case deleting column gsa proof resultant matrix full rank similar case omit case deleting column gsa resultant matrix follows gsa obtained deleting column matrix since det det therefore full rank type iii gsa form perform case analysis cases specified corresponding underbrace case deleting column gsa block form 
resultant matrix gsa expressed follows gsa form verify gsa full rank need check full rank owing construction check determinant following matrix zero matrices respectively det det case deleting last columns say column block form resultant matrix expressed follows case form respectively note verify gsa full rank need check determinant owing construction gsa case case case case case following matrix required full rank case deleting column gsa block form resultant matrix gsa expressed denotes submatrix obtained deleting column denotes submatrix obtained deleting column verify gsa full rank need check full rank since following form denotes row det det det deleting column matrix obvious det det gsa since det full rank det thus gsa full rank case deleting column gsa block form resultant matrix gsa expressed evidently det gsa gsa full rank proof claim let least integer first argue least integer assume true exists gsa gsa implies contradiction next argue satisfies ccp submatrices full rank let argue three cases case first column lies first columns suppose construction since submatrices full rank case first last column lie last columns suppose let hence submatrices full rank case first column lies last columns last column lies first columns suppose get let let let construction hence submatrices full rank using fact taking limits get fsm lim using fact taking limits get lim discussion coded caching systems constructed generator matrices satisfying consider definition let least integer let let submatrix specified columns gij demonstrate resolvable design generated linear block code satisfies also used coded caching scheme first construct resolvable design described section partitioned parallel classes constructed resolvable design partition subfile subfiles operate placement scheme algorithm delivery phase recovery set several equations generated benefit users simultaneously furthermore equations generated recovery sets recover proof claim missing subfiles section show recovery set possible generate equations fsm benefit users allow recovery missing subfiles given superscript subsequent discussion assumed condition evident system equations variables unique solution given vector since possible vectors result follows exactly mirrors discussion case skipped towards end first show picking users distinct parallel classes always form signals specifically consider blocks lij picked distinct parallel classes bij lij bij lij case form recovery set bipartite graph parallel classes recovery sets disjoint vertex subsets edges incident parallel class labeled arbitrarily parallel class denote label label given recovery set delivery phase proceeds choosing blocks distinct parallel classes provides equations benefit users note case randomly picking blocks parallel classes always result intersections different turns equation allows user recover missing subfile superscript label let demand user ubi formalize argument algorithm prove equations generated recovery set recover missing subfile superscript label claim consider resolvable design constructed linear block code satisfies ccp let set parallel classes corresponding emphasize consider blocks lij picked distinct parallel classes bij lij argument implies blocks distinct parallel classes points common blocks distinct parallel classes points common blocks users participate equations benefits users particular user recover missing subfile indexed element belonging intersection blocks equation similar argument lemma made justify enough equations found allow users recover 
missing subfiles algorithm signal generation algorithm proof recall construction section block specified follows pick blocks distinct parallel classes cardinality intersection always let gab consider lij picked distinct parallel classes assume let denote submatrix obtained retaining rows show vector column appears times note vectors linearly independent thus subset vectors linearly independent assume top submatrix matrix next consider system equations variables find set determine missing subfile indices user recover note add signals sig user ubs demands file equation allows recover corresponding missing subfile index element superscript determined recovery set bipartite graph end output signal set sig sake convenience argue user demands recover missing subfiles superscript note thus user needs obtain missing subfiles superscript delivery phase scheme repeatedly picks users different parallel classes equations algoe rithm allow recover input label signal set sig user recover missing subfiles superscript pick blocks claim next count number equations participates pick users parallel classes totally ways pick generate equations thus total equations user participates remains argue equation provides distinct file part user towards end let index set note pick set blocks impossible recovered subfiles codeword cyclic code thus component mapped let gab represent generator matrix code based prior arguments evident qiki pki distinct solutions equation gab turn implies appears qdkd times row result follows next show blocks distinct parallel classes kmin qdkd intersections kmin sakmin akmin akmin akmin kmin towards end consider sakmin lij picked distinct parallel classes kmin assume let denote submatrix obtained retaining rows show vector column appears qdkd times let lij represent component map consider cyclic code system equations variables lie since points distinct next suppose exist sets blocks bij lij bij contradiction since turn implies bij impossible since bij lij two blocks parallel class empty intersection finally calculate transmission rate algorithm recovery set transmit equations totally recovery sets since equation size equal subfile rate given linear block codes satisfy correspond coded caching system thus rate rate system little higher compared ccp system almost subpacketization level however comparing definitions evident rank constraints weaker compared therefore general find instances generator matrices satisfy example large class codes satisfy cyclic codes since consecutive columns generator matrices linearly independent thus cyclic codes always satisfy satisfy satisfy additional constraints discussed claim arguments identical made claim seen system equations solutions applying argument cyclic codes conclude vector appears qdkd times result follows cyclic codes mod first show matrix constructed constructed approach outlined section still results resolvable design let codeword cyclic code mod denoted prime using chinese remaindering map discussed section uniquely mapped codewords tang received degree mechanical engineering beihang university beijing china degree electrical information engineering beihang university beijing china currently working towards degree department electrical computer engineering iowa state university ames usa research interests include network coding channel coding aditya ramamoorthy received degree electrical engineering indian institute technology delhi degrees university california los angeles ucla respectively systems engineer biomorphic vlsi data storage 
and Signal Processing group of Marvell Semiconductor Inc. He has since been with the Electrical and Computer Engineering Department of Iowa State University, Ames, USA. His research interests are in the areas of network information theory, channel coding, and signal processing for bioinformatics and nanotechnology. Dr. Ramamoorthy served as an editor for the IEEE Transactions on Communications and is currently serving as an associate editor for the IEEE Transactions on Information Theory. He is the recipient of an Early Career Engineering Faculty Research Award from Iowa State University, an NSF CAREER award, and a professorship.
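As a closing illustration of the counting step in the appendix lemma above, here is a worked instance of the Chinese-remainder argument for the number of solutions of a linear congruence; the numbers are chosen purely for illustration and do not come from the paper.

\[
6x \equiv 9 \pmod{15}, \qquad 15 = 3 \cdot 5 .
\]
Modulo 3 the congruence reduces to $0 \equiv 0$, so all $3$ residues are solutions; modulo 5 it reduces to $x \equiv 4$, a unique solution since $6$ is a unit mod $5$. By the Chinese remainder theorem there are $3 \cdot 1 = 3$ solutions modulo $15$, namely
\[
x \in \{4, 9, 14\}: \qquad 6\cdot 4 = 24 \equiv 9, \quad 6\cdot 9 = 54 \equiv 9, \quad 6\cdot 14 = 84 \equiv 9 \pmod{15}.
\]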
7
journal machine learning research submitted published constrained policy improvement dec boris belousov jan peters belousov peters department computer science technische darmstadt ias hochschulstr darmstadt germany editor abstract ensure stability learning generalized policy iteration algorithms augment policy improvement step trust region constraint bounding information loss size trust region commonly determined divergence captures notion distance well also yields solutions paper consider general class derive corresponding policy update rules generic solution expressed derivative convex conjugate function includes solution special case within class focus family study effects choice divergence policy improvement previously known well new policy updates emerge different values show every type policy update comes compatible policy evaluation resulting chosen interestingly bellman error minimization closely related policy evaluation pearson penalty divergence results policy update critic carry asymptotic analysis solutions different values demonstrate effects using different divergence functions bandit problem common standard reinforcement learning problems keywords reinforcement learning policy search bandit problems introduction many art reinforcement learning algorithms including natural policy gradients kakade bagnell schneider peters trust region policy optimization trpo schulman relative entropy policy search reps peters impose divergence constraint successive policies parametric policy iteration avoid large steps towards unknown regions state space similar objective functions term proposed context optimal control kappen todorov inverse reinforcement learning ziebart approaches objective function form free energy therefore viewed performing free energy minimization still precup paper explore implications using generic constrain policy improvement objective function case resembles free energy objective boris belousov jan peters license see https attribution requirements provided http belousov peters commonly encountered variational methods wainwright jordan divergence replaced idea using penalties general problems goes back early work teboulle generalizations useful providing unified view existing algorithms statistical inference seen minimization altun smola various message passing algorithms understood minimizing different minka way new algorithms better suited particular generative adversarial networks gans goodfellow minimize particular divergence measuring similarity images nowozin paper knowledge first shed light policy improvement reinforcement learning point view background section provides background convex conjugate function highlighting key properties required derivations morimoto ali silvey generalizes many similarity measures probability distributions sason verdu two distributions finite set defined convex function example divergence corresponds fkl log note must absolutely continuous respect avoid division zero neyman pearson implies ally assume continuously differentiable includes cases interest ized unnormalized distributions example generkl alized divergence zhu rohwer corresponds reverse log derivations paper hellinger distance fit employing unnormalized distributions subsequently parameter imposing normalization condition constraint figure chernoff amari smoothly connects several parameter family generated prominent divergences particular choice family functions motivated generalization natural logarithm cichocki amari power function turns natural logarithm replacing 
natural logarithm derivative divergence log integrating condition yields constrained policy improvement divergence reverse pearson neyman hellinger log log log dom log table function convex conjugate derivatives values generalizes divergence reverse divergence hellinger distance pearson neyman reverse pearson figure displays points parabola every divergence reverse divergence symmetric respect point corresponding hellinger distance convex conjugate defined angle brackets denote dot product boyd vandenberghe key property relating derivatives yields table lists common functions together convex conjugates derivatives general case convex conjugate derivative given function convex attains minimum function positive domain function property linear inequality constraint dom follows requirement dom another result convex analysis crucial derivations fenchel equality arg occasionally put conjugation symbol bottom especially derivative conjugate function policy improvement bandits order develop intuition regarding influence entropic penalties policy improvement first consider simplified version reinforcement learning stochastic bandit problem bubeck resulting algorithm closely related family algorithms auer extremely good adversarial bandit problems fallen behind stochastic bandit problems nevertheless focus stochastic bandit problems illustrate crucial effects entropic penalties policy improvement clean intuitive way section meant serve foundation reinforcement learning section every time step agent chooses among actions every choice receives noisy reward drawn distribution belousov peters mean goal agent maximize expected total reward agent knew true value action optimal strategy would always choose action highest value arg maxa however since agent know advance faces dilemma every time step either pick best action according current estimate action values try exploratory action may turn better arg max generic way encode introducing policy distribution agent draws action thus question becomes given current policy current estimate action values policy next time step unlike choice best action perfect information sampling policies hard derive first principles ghavamzadeh section formalize choosing next stochastic policy maximize expectation regularizing change policy formalization always yields convex optimization problem solution next stochastic policy obtain classical policy updates softmax etc well novel ones resulting different empirical evaluations simulated bandit environment illustrate differences policy updates corresponding various penalty functions constrained policy optimization problem bandits policy improvement time step derived maximizing expected return punishing policy change old policy new policy kind regularization regularization added objective function weighted temperature exploration exploitation simplicity current estimate action values denoted policy achieves higher expected return solution following optimization problem max different formulations problem divergence place proposed inequality constraint peters bounding information loss certain fixed amount automatically yield lagrange multiplier found optimization alternating maximization expected return fixed information loss subsequent information loss minimization constant expected return still precup yields algorithms allow automatic adaptation divergence added penalty suggested azar kappen since treatment three mathematical problems differs sets hyperparameters results straightforwardly generalize two formulations constrained 
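The family of generator functions listed in the table above can be reconstructed from the standard Cichocki–Amari definition as follows; this is a sketch rather than the paper's exact table, and the special cases agree with the divergences named in the text.

\[
f_\alpha(x) \;=\; \frac{x^{\alpha} - \alpha x + \alpha - 1}{\alpha(\alpha - 1)} \qquad (\alpha \neq 0, 1),
\]
with the limits and special cases
\[
\alpha \to 0:\; x - 1 - \log x \ \text{(reverse KL)}, \qquad
\alpha \to 1:\; x \log x - x + 1 \ \text{(KL)},
\]
\[
\alpha = \tfrac{1}{2}:\; 2(\sqrt{x} - 1)^2 \ \text{(squared Hellinger)}, \qquad
\alpha = 2:\; \tfrac{(x-1)^2}{2} \ \text{(Pearson } \chi^2\text{)}, \qquad
\alpha = -1:\; \tfrac{(x-1)^2}{2x} \ \text{(Neyman } \chi^2\text{)}.
\]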
policy improvement differentiating lagrangian problem respect equating derivative zero find expression optimal new policy derivative convex conjugate function substituting back lagrangian using fenchel equality arrive dual problem min arg rangex constraint argument convex conjugate particularly simple linear form see section necessary background special divergence case completely disappears range see table divergences bandit scenario one free choose whether solve optimization problem primal form dual form however see later dual formulation much better suited reinforcement learning lagrange multipliers disappear choosing function important consider whether dom implies improved policy put zero mass actions although required finite value definition many divergences finite limit divergence however even functions defined zero necessarily finite derivative zero log dom new policy put zero probability mass actions possible old policy words complementary slackness condition implies therefore ignore constraint singularity thereby one commonly omits constraint using divergence reverse general hand dom actions may get zero probability new policy case emerges effects divergence type parameter controls step size divergence type affects search direction greedy policy change effects temperature parameter independent chosen temperature parameter controls magnitude deviation new policy old one optimal lagrange multiplier plays belousov peters role baseline similar policy gradient methods sutton peters schaal low temperatures policy arg maxa tends towards greedy action selection sutton barto pwith maxa policy baseline longer change values agent chooses intermediate solution focusing single best action remaining close old policy effects policy improvement given functional form policy update find lagrange multipliers thus obtain solution problem two cases exponential linear corresponds solve optimization problem numerically table summarizes table effects effects policy improvement recover policy greedy action selection method sutton barto policy assigns probability best action spreads remaining greedy probability mass uniformly among rest parameter tional see section values correspond cies sutton barto parameter controls distribution linear probability mass among suboptimal actions large negative values correspond uniform distributions whereas smaller elimination distribute probability mass according values arrive form policy bridle baseline log exp regular arises uniform distribution arms policy results using divergence measure distance suggested several times literature azar kappen peters still precup increasing obtain linear policy pointed section actions may get zero probability therefore general one needs keep track lagrange multipliers however simplify analysis dropping sufficiently large indeed exists temperature thus inequality constraints inactive minimum temperature mina defined terms advantage function constrained policy improvement measures agent gain taking action instead following policy baseline equals expected return policy thus obtain linearly policy obtain new policy call policy actions get equal probability except set worst actions gets probability zero size set depends qmax qmin effects baseline complete picture describe asymptotic behavior divergence type varied consider limit large negative established section lagrange multipliers vanish sufficiently large solution hits inequality constraint lower bound table effects baseline obtained given holds also holds maximum since solution lies 
constraint limit one obtains lim lim max max baseline qmax qmax qmin qmin recall section fixed baseline shifts towards interior feasible set temperature increased therefore optimal baseline divergence lies range qmax analysis yields qmin table summarizes asymptotic results empirical evaluation policy improvement bandits effects divergence type demonstrated simulated stochastic bandit problem policy improvement single iteration well regret period time compared different values regret effects policy improvement figure shows effects policy update consider bandit problem arm values temperature kept fixed values initial policy uniform arm values fixed several iterations shown figure comparison extremely large positive negative values correspond policies respectively small values contrast weigh actions according values policies ucb peaked eventually turning politime step cies policies uniform figure average regret put zero mass bad actions eventually turning various values elimination policies policy iteration may spend lot time end deciding two best actions whereas final convergence faster belousov peters arm values action probabilities iteration iteration iteration iteration iteration arm number arm number figure effects policy improvement row corresponds fixed first four iterations policy improvement together iteration shown row large positive eliminate bad actions one one keeping exploration level equal among rest small weigh actions according values actions low value get zero probability remain possible small probability large negative focus best action exploring remaining ones equal probability regret shown figure effects regret average regret nqmax different values function time step confidence error bars also plot performance ucb algorithm bubeck comparison presented results obtained bandit environment rewards gaussian distribution variance arm values estimated observed rewards policy updated every time steps temperature parameter decreased starting every policy update according schedule results averaged runs general extreme accumulate regret however eventually focus single action flatten small accumulate less regret may keep exploring actions longer values perform comparably ucb around steps reliable estimates values obtained steps steps figure shows average regret given number steps steps time steps function divergence type best results achieved small values large negative correspond policies oftentimes prematurely converge action failing discover optimal action long time exploration probability small large positive correspond policies may mistake parameter completely eliminate best action spend lot time figure regret ciding two options end learning accumulating fixed time function constrained policy improvement regret optimal value depends time horizon one wishes optimize policy minimum curves shifts slightly negative towards range increasing time horizon policy iteration ergodic mdps methodology described previous section generalized bandit scenario reinforcement learning focus setting sutton barto every time step agent finds state takes action drawn stochastic policy subsequently agent transitions state state probability receives reward fraction time agent spends average state result following policy given steady state distribution satisfies stationarity condition steady state distribution sometimes denoted emphasize dependence policy agent seeks policy maximizes expected return subject constraint provided probability distributions constrained policy optimization ergodic mdps 
generalize bandit problem adding single additional constraint namely stationarity condition constraint directly yields value function peters policy evaluation step corresponding dual variable however straightforward lagrangian rewrite terms policy state distribution assuming distribution old policy aim stay similar new distribution achieving higher expected return obtain following optimization problem max similar lagrangian problem obtained value function arises additional lagrangian multiplier belousov peters lagrangian problem written function dual variables corresponding stationarity constraint note identified value function advantage function solution problem expressed function dual variables allows extracting optimal next policy bayes rule optimal values dual variables found solving dual problem min arg rangex constraint arg linear derivative convex function monotone boyd vandenberghe problem altogether convex optimization problem therefore solved efficiently generic policy iteration algorithm employing penalty proceeds two steps policy evaluation dual problem solved given policy improvement analytic solution primal problem used update following subsection consider several particular choices divergence function detail two important special cases similar bandit scenario see section two special choices divergence analytically find lagrange multipliers thus obtain policy evaluation policy improvement steps simplest form divergence policy evaluation turns min log exp constrained policy improvement lagrangian upon substitution variables except remarkably baseline equals lagrangian dual function true divergence case policy improvement step takes form exp exp note baseline cancels policy would included distribution pearson divergence one needs careful dual variables ensuring probabilities necessarily zero see section notational simplicity employ differential advantage defined difference advantage expected return following sutton barto straightforward check temperature bigger absolute value minimum differential advantage mins constraint satisfied baseline turns average return becoming independent contrast divergence case policy evaluation step corresponds minimization squared differential advantage scaled min policy improvement step corresponds linear differential advantage reweighting old policy careful reader shall notice solution pearson obtained approximation divergence solution limit turns even general statement holds solution turns pearson solution limit high temperatures small policy update steps underlying reason pearson quadratic approximation around unity since pushing towards infinity puts weight divergence penalty optimization objective distance old distribution new one becomes smaller justifying taylor expansion therefore one expect see pronounced differences various penalties either big update steps approximations value function policy used belousov peters practical algorithm policy iteration solving problem computing optimal next policy would require full knowledge environment practice reinforcement learning agent would need estimate experience prohibitively expensive large spaces fortunately rephrase algorithm entirely terms sample averages thus model estimation replace expectations sample averages using transitions gathered current policy agent need visit information current policy suffices compute averages algorithm summarizes main steps constrained policy iteration function approximation dimensionality state space large one resort function approximation methods note 
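For readability, the KL special case of the two steps can be summarised as follows; this is a reconstruction consistent with the surrounding derivation (here $\mu$ denotes the state–action distribution of the old policy, $\eta$ the temperature, and $\bar{A}_v$ the differential advantage induced by a candidate value function $v$), offered as a sketch rather than the paper's exact display.

\[
\text{policy evaluation:} \quad \min_{v}\; \eta \log \sum_{s,a} \mu(s,a)\, \exp\!\Big( \frac{\bar{A}_v(s,a)}{\eta} \Big),
\]
\[
\text{policy improvement:} \quad \pi_{\text{new}}(a \mid s) \;\propto\; \pi_{\text{old}}(a \mid s)\, \exp\!\Big( \frac{\bar{A}(s,a)}{\eta} \Big).
\]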
relaxing steady state constraint steady features constraint naturally introduces linear function approximation fixed features parameters new lagrangian multipliers peters knowledge constraint way introduce function approximation way primal optimization problem next policy dual optimization problem implementation details several practical improvements make algorithm faster robuster first temperature parameter decayed iterations order guarantee convergence solution original problem experiments simple exponential decay sufficient however complex adaptive schemes devised second one omit dual variables equal zero see section moreover one omit even sufficiently big exact value ignored depends scale rewards special case explicit condition see section third estimate advantage actions sampled current policy set baseline serves proxy expected reward experiments strategy performed best fourth introducing little slack linear constraint domain convex conjugate function improves robustness helps optimizer empirical results ergodic mdps evaluate policy iteration algorithm standard reinforcement learning problems openai gym brockman environments terminate absorbing states restarted data collection order ensure ergodicity figure demonstrates learning dynamics different environments various choices divergence function parameter settings implementation details found appendix summary one either promote risk averse behavior choosing may however result suboptimal exploration one constrained policy improvement algorithm policy iteration type input initial policy divergence function temperature define dual function every sample every pair return foreach policy update sampling gather current policy policy evaluation minimize dual function defined arg minv rangex policy improvement every pair output optimal policy promote risk seeking behavior may lead overly aggressive elimination options experiments suggest optimal balance found range pointed effect policy iteration linear symmetric respect contrary one could expected given symmetry function pointed section switching may little effect policy iteration whereas switching may much pronounced influence learning dynamics relation mean squared bellman error minimization seen section policy evaluation pearson penalty corresponding dual optimization problem equivalent minimization mean squared differential advantage msda establish relation msda objective classical mean squared bellman error msbe minimization sutton barto order state precisely msda msbe objectives related need recall several definitions assume value function parameterized vector discussed section let agent follow policy yields distribution discounted infinite horizon setting one defines temporal difference error discount factor setting error typically replaced differential error belousov peters chain expected reward cliffwalking frozenlake iteration figure effects policy iteration row corresponds given environment results different values split three subplots within row extreme left refined values right cases negative values initially show faster improvement immediately jump mode keep exploration level low however certain umber iterations get overtaken moderate values weigh advantage estimates evenly positive demonstrate high variance learning dynamics clamp probability good actions zero advantage estimates overly pessimistic never able recover mistake large positive may even fail reach optimum altogether exemplified plots stable reliable lie reverse hellinger distance outperforming frozenlake environment 
The differential error differs from the usual TD error in two ways: first, there is no discount factor; second, the expected reward of the current policy is subtracted, in order to ensure that the mean differential error is zero (see the sketch below).
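For reference, the three error signals introduced next can be written out as follows; this is a sketch in the average-reward notation used here, with $\rho(\pi)$ the expected return of $\pi$.

\begin{align*}
\bar{\delta}(s,a,s') &= r(s,a) - \rho(\pi) + V(s') - V(s) && \text{differential TD error,}\\
\bar{A}(s,a) &= \mathbb{E}_{s'}\big[\bar{\delta}(s,a,s')\big] && \text{differential advantage,}\\
\bar{B}(s) &= \mathbb{E}_{a \sim \pi}\big[\bar{A}(s,a)\big] && \text{differential Bellman error.}
\end{align*}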
divergence minimization statistical inference via convex duality learning theory issn doi amari methods statistics springer new york doi peter auer nicolo yoav freund robert schapire bandit problem siam journal computing mohammad gheshlaghi azar bert kappen dynamic policy programming journal machine learning research issn andrew bagnell jeff schneider covariant policy search ijcai pages leemon baird residual algorithms reinforcement learning function approximation proceedings international conference machine learning july issn doi stephen boyd lieven vandenberghe convex optimization volume isbn doi john bridle training stochastic model recognition algorithms networks lead maximum mutual information estimation parameters nips isbn greg brockman vicki cheung ludwig pettersson jonas schneider john schulman jie tang wojciech zaremba openai gym arxiv preprint bubeck regret analysis stochastic nonstochastic bandit problems foundations trends machine learning issn doi herman chernoff measure asymptotic efficiency tests hypothesis based sum observations annals mathematical statistics pages andrzej cichocki amari families divergences flexible robust measures similarities entropy issn doi imre eine informationstheoretische ungleichung und ihre anwendung auf den beweis der von markoffschen ketten publ math inst hungar acad constrained policy improvement christoph dann gerhard neumann jan peters policy evaluation temporal differences survey comparison journal machine learning research volume pages isbn mohammad ghavamzadeh shie mannor joelle pineau aviv tamar bayesian reinforcement learning survey foundations trends machine learning issn doi ian goodfellow jean mehdi mirza bing david sherjil ozair aaron courville yoshua bengio generative adversarial nets nips isbn sham machandranath kakade natural policy gradient nips pages isbn doi hilbert kappen path integrals symmetry breaking optimal control theory journal statistical mechanics theory experiment issn doi tom minka divergence measures message passing technical report microsoft research tetsuzo morimoto markov processes journal physical society japan sebastian nowozin botond cseke ryota tomioka training generative neural samplers using variational divergence minimization nips pages jan peters stefan schaal policy gradient methods robotics intelligent robots systems international conference pages ieee jan peters sethu vijayakumar stefan schaal reinforcement learning humanoid robotics humanoids doi jan peters katharina yasemin altun relative entropy policy search aaai pages igal sason sergio verdu inequalities ieee transactions information theory issn doi john schulman sergey levine michael jordan pieter abbeel trust region policy optimization icml isbn doi susanne still doina precup approach reinforcement learning theory biosciences issn doi richard sutton andrew barto reinforcement learning introduction mit press cambridge belousov peters richard sutton david mcallester satinder singh yishay mansour policy gradient methods reinforcement learning function approximation advances neural information processing systems pages marc teboulle entropic proximal mappings applications nonlinear programming mathematics operations research issn doi emanuel todorov markov decision problems nips isbn martin wainwright michael jordan graphical models exponential families variational inference foundations trends machine learning issn doi huaiyu zhu richard rohwer information geometric measurements generalisation technical report aston university brian ziebart andrew 
Maas, Andrew Bagnell, and Anind Dey. Maximum entropy inverse reinforcement learning. In AAAI.
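As a small numerical sanity check of the softmax update from the bandit section, consider three arms with a uniform old policy, temperature $\eta = 1$, and value estimates $Q = (1, 0, 0)$; the numbers are illustrative only and not from the paper.

\[
\pi_{\text{new}}(a) \;\propto\; \pi_{\text{old}}(a)\, e^{Q(a)/\eta} \;=\; \tfrac{1}{3}\,(e,\, 1,\, 1) \;\Longrightarrow\; \pi_{\text{new}} \approx (0.576,\; 0.212,\; 0.212).
\]
Lowering $\eta$ sharpens the distribution towards the greedy arm, while raising it keeps the new policy close to uniform; the baseline cancels under normalisation.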
2
eulerian graded jul linquan wenliang zhang abstract let field arbitrary characteristic ring differential operators inspired euler formula homogeneous polynomials introduce class graded called eulerian graded proved vast class including local cohomology modules hjiss homogeneous ideals eulerian application theory eulerian graded prove socle elements local cohomology module hjiss must degree characteristic answers question raised also proved graded eulerian hence main result recovered application theory eulerian graded graded injective hull homogeneous prime ideal discussed well introduction let polynomial ring indeterminates field standard grading deg deg nonzero let denote order differential operator respect ring differential operators note characteristic weyl algebra ring differential operators natural given deg deg deg nonzero classical euler formula homogeneous polynomials says deg homogeneous polynomial inspired euler formula introduce class called eulerian graded graded whose homogeneous elements satisfy series higher order euler formulas definition one main results concerning eulerian graded following proved section section theorem let homogeneous ideals eulerian let graded injective hull eulerian local cohomology module hjiss eulerian application theory eulerian graded following result local cohomology proved section theorem let notations previous theorem socle elements hjiss must degree consequently hjs isomorphic graded direct sum copies result particular gives positive answer question stated recovers main theorem paper organized follows section eulerian graded defined arbitrary field basic properties modules discussed section section consider second author partially supported nsf grant dms eulerian graded characteristic characteristic respectively particular show section graded introduced eulerian section apply theory eulerian graded local cohomology modules theorem proved section finally section application theory graded injecitve hull homogeneous prime ideal considered finish introduction fixing notation throughout paper follows denotes polynomial ring indeterminates field order differential operator respect denoted denotes ring differential operators follows corollary every element may uniquely written linear combinations monomials natural given deg deg deg nonzero evident graded graded left compatible natural given graded module denotes degree shifted irrelevant maximal ideal denoted graded injective hull denoted may identified space basis natural structure given xenn xej xenn degree example note integer nonnegative integer use denote number still char graded effect element acknowledgement authors would like thank professor gennady lyubeznik carefully reading preliminary version paper valuable suggestions improved paper considerably authors also grateful anonymous referee comments paper eulerian graded section introduce eulerian graded discuss basic properties begin following definition definition euler operator denoted defined xinn particular usual euler operator graded called eulerian homogeneous element satisfies deg every start easy lemma lemma positive integers proof easy check characteristic following lemma special case proposition needed sequel lemma positive integers min xti following proposition indicates connection among euler operators proposition every rer proof lemma know xjj xinn xjj xjj xinn lemma xjj xinn xinn xinn xjj xinn rer rer remarks order remark eulerian graded eulerian equivalently graded eulerian graded one characteristic trivial characteristic take 
give short argument characteristic positive theorem lucas decompositions particular apply get every hence negative look decomposition taking get hence every therefore suffices show must sign pick direct computation using theorem lucas gives contradiction definition depend characteristic however see section characteristic need consider usual euler operator eulerian proof goes follows monomial xjnn arbitrary integers allow negative integers xjnn xinn xjnn xjnn xjnn explain last equality use induction argument clear suppose equality second equality induction hypothesis last equality identity see clearly preserve addition since xjnn clearly degree deg see computation every homogenous therefore eulerian dpis clear identity homogeneous degree one main results section check whether graded eulerian suffices check whether element set homogeneous generators satisfies theorem let graded assume set homogeneous eulerian satisfies euler formula every proof eulerian clear satisfies euler formula every assume satisfies euler formula every wish prove eulerian end suffices show homogeneous element satisfies euler formula every xsnn clear suffices consider xsi without loss generality may assume prove two steps first consider finish first step may replace second step finish proof first use induction show satisfies euler formula compute deg general suppose know deg every xinn xinn xinn min xinn min deg deg deg xinn min xinn min deg xinn xinn deg deg deg deg follows lemma min follow lemma obtained setting true induction follows identity page completes first step next consider xinn xinn xinn xinn xinn xinn deg deg deg deg finishes second step case consider easy induction may assume satisfies deg deg second equality case assume satisfies completes proof theorem immediate consequence theorem cyclic following proposition let homogeneous left ideal eulerian xinn every proof according theorem eulerian satisfies euler formula since degree satisfies euler formula holds every proposition graded eulerian graded submodule graded quotient proof let graded submodule since homogeneous element also homogeneous element clear also eulerian given surjection homogeneous element homogeneous degree hence every deg deg proves also eulerian end section following result one key ingredients application local cohomology theorem graded eulerian graded eulerian proof remark suffices show eulerian graded clear eulerian remark since spanned xjnn computation remark clear eulerian graded element xjnn degree eulerian graded characteristic throughout section field characteristic section collect properties eulerian graded char main result graded eulerian one ingredients application local cohomology section first observe characteristic homogeneous element graded satisfies instead eulerian proposition let graded deg every homogeneous element eulerian deg proof prove induction every exactly deg deg given suppose know proposition know rer rer deg deg deg deg deg deg deg second equality uses induction hypothesis finishes proof remark seen lemma quite useful char unfortunately longer case characteristic instance mod link via lemma one reasons treat characteristic characteristic separately two different sections eulerian corollary let homogeneous left ideal proof clear proposition proof proposition proposition eulerian graded homogeneous multiplicative system particular eulerian homogeneous polynomial proof proposition suffices show homogeneous deg compute deg deg deg deg deg finishes proof remark turns eulerian graded stable extension following short 
exact sequence graded multiplication map since dim eulerian finitely generated even cyclic eulerian graded may holonomic rather straightforward check finitely generated eulerian graded holonomic see section characteristic vast class graded namely local cohomology modules eulerian holonomic eulerian graded characteristic throughout section field characteristic section prove eulerian preserved localization proof quite different characteristic also show graded always eulerian graded enable recover main result section proposition eulerian graded homogeneous multiplicative system particular eulerian homogeneous polynomial proof first notice tells every particular multiply numerator denominator large power write max homogeneous deg deg deg deg deg deg last equality characteristic finishes proof recall definition graded follows definition definitions integer let denote left whose right structure given equipped isomorphism called graded graded remark clear definition map induced also isomorphism induces structure specify induced structure suffices specify acts choose given element consider write define see details graded induced map also hence naturally graded turns graded eulerian graded theorem graded eulerian graded deg proof pick homogeneous element want show pick since graded isomorphism homogeneous fre assume deg deg deg deg deg deg particular every characteristic know deg deg finishes proof application local cohomology let arbitrary commutative noetherian ring ideal recall generated complex mfj mfj whose cohomology module hii map induced corresponding differential natural localization sign subset otherwise graded homogeneous ideal homogeneous elements graded differential complex natural localization follows cohomology module hjiss graded field characteristic proven using theory graded theorem theorem let polynomial ring field teristic homogeneous ideals local cohomology module hjiss isomorphic direct sum copies socle elements hjiss must degree natural question asked whether result holds characteristic using theory eulerian graded give proof result particular answer question characteristic affirmative begin following easy observation proposition let homogeneous ideals local cohomology module hjiss graded proof since natural localization map differential complex proposition follows immediately complex characterization local cohomology theorem let homogeneous ideals local cohomology module hjiss considered graded eulerian proof follows immediately propositions complex characterization local cohomology proposition proposition characteristic lemma page characteristic let isomorphism also proved page proof proven proposition characteristic lemma page characteristic map given isomorphism grading induced one hence deg since socle element degree follows deg therefore follows defines isomorphism proposition theorem characteristic lemma page characteristic let graded suppr graded proof graded hence also graded first claim socle generated homogeneous elements reason follows pick generator socle write sum homogeneous elements different degree every since killed hence every different degree therefore killed every hence killed socle proves claim also note since socle killed minimal homogeneous set generators actually homogeneous let homogeneous socle deg homomorphism sends copy map injective induces isomorphism socles supported injective graded supported since map socles isomorphism theorem let eulerian graded suppr isomorphic graded direct sum copies proof since supported know isomorphic graded 
proposition assumption eulerian follows theorem isomorphic graded direct sum copies finishes proof corollary let homogeneous ideals hjiss isomorphic graded direct sum copies equivalently socle elements hjiss must degree proof follows immediately theorems remark proven resp every hjiss holonomic resp finite resp characteristic resp characteristic therefore case know hjiss finite bass numbers theorem theorem hjiss follows corollary integer remarks graded injective hull homogeneous prime ideal seen theorem eulerian graded section wish extend result homogeneous prime ideal denotes graded injective hull see chapter end discuss detail graded structures underlying idea exist canonical choice grading considered graded however canonical grading considered graded remark graded since least one contained hence multiplication induces automorphism consequently isomorphism follows immediately integer category graded words integers sense tells canonical grading considered merely graded however see equipped natural eulerian graded structure point view indeed unique natural grading structure obtained via considering hpht denotes homogeneous localization respect inverting homogeneous elements natural grading follows remark grading hpht choose set homogeneous generators consider complex rfi since module natural grading differential also natural grading hence identify hpht natural grading proposition hpht category graded proof graded injective resolution resolution chapter integer depending notice integers remark resolution written hph let homogeneous localization homology apply apply becomes homogeneous localization get homology exactly get actually finishes proof remark structure since hph natural graded structure follows proposition also natural graded structure since graded natural ask following question question let homogeneous prime ideal natural grading making eulerian graded remark hpht always eulerian graded theorem propositions remark know category graded eulerian graded exactly one identify canonical contrary case consider graded see category graded otherwise would every hence would one choice eulerian wish find natural grading makes eulerian need following lemma may experts lemma canonical isomorphism exth homr proof hph homology apply homology kernel since homr left exact know homr hph isomorphic kernel homr homr exactly homology apply homr definition exthr want emphasize isomorphism obtained depend grading long equipped grading calculate exthr homr hph definition graded irrelevant maximal ideal defined max proposition min annhpht hence inclusion hpht hpht proof let let min annhpht lemma know exthr homr hph annhph know min exthr min exthr graded local duality min exthr max hence get second statement follows first one sending element annhpht degree remark discussed far see hpht graded injective module module inclusion hpht always sending lowest degree element annhpht therefore propose canonical grading effect identified hpht grading hpht obtained via complex end following proposition proposition compare theorem lemma given grading proposed remark eulerian minimal graded injective resolution resolution written proof clear grading fact unique grading hpht makes eulerian follows immediately calculation hpht using minimal resolution note references askey orthogonal polynomials special functions society industrial applied mathematics philadelphia brodmann sharp local cohomology algebraic introduction geometric applications cambridge studies advanced mathematics vol cambridge university press cambridge 
A. Grothendieck, Éléments de géométrie algébrique IV: étude locale des schémas et des morphismes de schémas, Inst. Hautes Études Sci. Publ. Math.
M. Katzman, G. Lyubeznik, and W. Zhang, Two interesting examples of D-modules in characteristic p > 0, Bull. Lond. Math. Soc.
É. Lucas, Sur les congruences des nombres eulériens et des coefficients différentiels des fonctions trigonométriques, suivant un module premier, Bull. Soc. Math. France.
G. Lyubeznik, Finiteness properties of local cohomology modules (an application of D-modules to commutative algebra), Invent. Math.
G. Lyubeznik, F-modules: applications to local cohomology and D-modules in characteristic p > 0, J. Reine Angew. Math.
G. Lyubeznik, Injective dimension of D-modules: a characteristic-free approach, J. Pure Appl. Algebra.
G. Lyubeznik, A characteristic-free proof of a basic result on D-modules, J. Pure Appl. Algebra.
W. Zhang, Lyubeznik numbers of projective schemes, Adv. Math.
Y. Zhang, Graded F-modules and local cohomology, Bull. Lond. Math. Soc.
Department of Mathematics, University of Michigan, Ann Arbor, Michigan. E-mail address: lquanma
Department of Mathematics, University of Nebraska, Lincoln, Nebraska.
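To make the Eulerian condition concrete, here is a minimal worked example on $R = k[x]$, writing $\partial^{[i]} = \tfrac{1}{i!}\partial^i$ for the divided-power operators and $E_s$ for the $s$-th Euler operator as defined above; the example is ours and not from the paper.

\[
z = x^2, \ \deg z = 2: \qquad E_1 \cdot x^2 = x\,\partial(x^2) = 2x^2 = \binom{2}{1} x^2, \qquad E_2 \cdot x^2 = x^2\,\partial^{[2]}(x^2) = x^2 = \binom{2}{2} x^2,
\]
and $E_s \cdot x^2 = 0 = \binom{2}{s} x^2$ for $s \ge 3$, matching the defining identity $E_s \cdot z = \binom{\deg z}{s}\, z$ for all $s \ge 1$.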
0
free applicative functors paolo capriotti ambrus kaposi university nottingham united kingdom university nottingham united kingdom pvc auk applicative functors generalisation monads allow expression effectful computations otherwise pure language like haskell applicative functors preferred monads structure computation fixed priori makes possible perform certain kinds static analysis applicative values define notion free applicative functor prove satisfies appropriate laws construction left adjoint suitable forgetful functor show free applicative functors used implement embedded dsls statically analysed introduction free monads haskell practically used construction given endofunctor free monad given simple inductive definition data free return free free typical use case construction creating embedded dsls see example free called term context functor usually obtained coproduct number functors representing basic operations resulting dsl minimal embedded language including operations one problem free monad approach programs written monadic dsl amenable static analysis impossible examine structure monadic computation without executing paper show similar free construction realised context applicative functors particular make following contributions give two definitions free applicative functor haskell section show equivalent section prove definition correct sense really applicative functor section free precise sense section present number examples use free applicative functors helps make code elegant removes duplication enables certain kinds optimizations possible using free monads describe differences expressivity dsls using free applicatives free monads section compare definition existing implementations idea section krishnaswami levy eds mathematically structured functional programming msfp eptcs capriotti kaposi work licensed creative commons attribution license capriotti kaposi applicative functors regarded monoids category endofunctors day convolution see instance example exists general theory constructing free monoids monoidal categories paper aim describe special case applicative functors using formalism accessible audience haskell programmers familiarity applicative functors required although helpful understand motivation behind work make use category theoretical concepts justify definition haskell code present also stand proofs paper carried using equational reasoning informally defined total subset haskell sections show interpret definitions proofs general locally presentable cartesian closed category category sets applicative functors applicative functors also called idioms first introduced generalisation monads provides lighter notation expressing monadic computations applicative style since used variety different applications including efficient parsing see section regular expressions bidirectional routing applicative functors defined following type class class functor applicative pure idea value type represents effectful computation returning result type pure method creates trivial computation without effect allows two computations sequenced applying function returned first value returned second since every monad made applicative functor canonical abundance monads practice haskell programming naturally results significant number practically useful applicative functors applicatives arising monads however widespread probably although relatively easy combine existing applicatives see example techniques construct new ones thoroughly explored far paper going define applicative functor 
freea haskell functor thus providing systematic way create new applicatives used variety applications meaning freea clarified section sake following examples freea thought simplest applicative functor built using example option parsers illustrate free applicative construction used practice take running example parser options tool simplicity limit interface accept options take single argument use double dash prefix option name example tool create new user unix system could used follows precise two canonical ways turn monad applicative functor opposite orderings effects free applicative functors username john fullname john doe parser could run argument list would return record following type data user user username string fullname string int deriving show furthermore given parser possible automatically produce summary options supports presented user tool documentation define data structure representing parser individual option specified type functor data option option optname string optdefault maybe optreader string maybe deriving functor want create dsl based option functor would allow combine options different types single value representing full parser stated introduction common way create dsl functor use free monads however taking free monad option functor would useful first sequencing options independent later options depend value parsed previous ones secondly monads inspected without running way obtain summary options parser automatically really need way construct parser dsl way values returned individual options combined using applicative interface exactly freea provide thus use freea option embedded dsl interpret type parser unspecified number options possibly different types run options would matched input command line arbitrary order resulting values eventually combined obtain final result type specific example expression specify command line option parser would look like userp freea option user userp user one option username nothing one option fullname one option nothing readint readint string maybe int need generic smart constructor capriotti kaposi one option freea option lifts option parser example web service client one applications free monads exemplified definition monads allowing express computations make use limited subset operations given following functor data webservice get url url params string result string post url url params string body string cont deriving functor free monad webservice allows definition application interacting web service convenience monad smart constructors defined two basic operations getting posting get url string free webservice string get url params free get url params return post url string string free webservice post url params body free post url params body return example one implement operation copies data one server another follows copy url string url string free webservice copy srcurl srcpars dsturl dstpars get srcurl srcpars post dsturl dstpars applications might need control operations going executed eventually run embedded program contained value type free webservice example web service client application executing large number get post operations might want rate limit number requests particular server putting delays hand parallelise requests different servers another useful feature would estimate time would take execute embedded web service application however way achieve using free monad approach fact even possible define function like count free webservice int returns total number operations performed value type free webservice see 
consider following example updates email field blog posts particular website updateemails string free webservice updateemails newemail entryurls get form words entryurls entryurl post entryurl updateemail newemail free applicative functors number post operations performed updateemails number blog posts determined pure function like count freea construction presented paper represents general solution problem constructing embedded languages allow definition functions performing static analysis embedded programs count freea webservice int simple example example applicative parsers idea monads flexible also explored context parsing swierstra duponcheel showed improve performance capabilities embedded language grammars giving expressivity monads basic principle weakening monadic interface applicative functor precisely alternative functor becomes possible perform enough static analysis compute first sets productions approach followed applicative functor defined keeps track first sets whether parser accepts empty string combined traditional monadic parser regarded applicative functor using generalised product described question whether possible express construction general form way given functor representing notion parser individual symbol input stream applying construction one would automatically get applicative functor allowing elementary parsers sequenced free applicative functors used end start functor describes elementary parser individual elements input returning values type freea parser used full input combines outputs individual parsers built yielding result type unfortunately applying technique directly results strictly less expressive solution fact since freea simplest applicative necessarily applicative also alternative instance case essential alternative type class defined follows class applicative alternative empty alternative instance gives applicative functor structure monoid empty unit element binary operation case parsers empty matches input string choice operator two parsers discuss issue alternative detail section definition free applicative functors obtain suitable definition free applicative functor generated functor first pause reflect one could naturally arrive definition applicative class via obvious generalisation notion functor given functor fmap method gives way lift unary pure functions effectful functions functions arbitrary arity capriotti kaposi example given value type regard nullary pure function might want lift value type similarly given binary function quite reasonable ask lifting something type functor instance alone provide either liftings liftings could define therefore natural define type class generalised functors able lift functions arbitrary arity class functor multifunctor fmap easy see fmapn defined terms example multifunctor however trying think laws type class ought observe multifunctor actually none applicative disguise fact exactly type pure easily convert vice versa fmap difference expects first two arguments types respectively combined single argument type always done single use fmap assume functor effectively equivalent nevertheless roundabout way arriving definition applicative shows applicative functor functor knows lift functions arbitrary arities overloaded notation express application fmapi defined referred idiom brackets given pure function arbitrary arity effectful arguments idiom bracket notation defined pure free applicative functors build expression formally using purel constructor corresponding pure infix constructor corresponding 
purel corresponding inductive definition data freeal purel infixl multifunctor typeclass idiom brackets freeal definition correspond left parenthesised canonical expressions built pure lists built concatenation two canonical forms also define canonical form applicative functors pure value sequence effectful functions applied pure replacing pure constructor pure infix constructor gives following expression pure corresponding inductive type data freea pure freea infixr freeal freea isomorphic see section pick version official definition since simpler define functor applicative instances instan functor functor freea fmap pure pure fmap fmap functor laws verified structural induction simply applying definitions using functor laws instan functor applicative freea pure pure sometimes called simplified form necessarily unique capriotti kaposi pure fmap fmap uncurry last clause applicative instance type need return value type freea since allows express applications functions uncurry get value type use recursively see section justification recursive call pair value type freea finally use constructor build result note analogy definition lists applications example option parsers continued using definition free applicative compose command line option parser exactly shown section definition userp smart constructor one lifts option functor representing basic operation embedded language term language implemented follows one option freea option one opt fmap const opt pure function computes global default value parser also defined parserdefault freea option maybe parserdefault pure parserdefault optdefault parserdefault section show definition free construction gives general ways structure programs specifically able define generic version one works functor exploiting adjunction describing free construction able shorten definition parserdefault define function listing possible options function parsing list command line arguments given arbitrary order section example web service client continued section showed embedded dsl web service clients based free monads support certain kinds static analysis however remedy using free applicative functor webservice fact count function definable freea webservice moreover limited particular example possible define count free applicative functor count freea int count pure count count free applicative functors static analysis embedded code also allows decorating requests parallelization instructions statically well rearranging requests server course extra power comes cost namely expressivity corresponding embedded language severely reduced using freea webservice urls servers requests sent must known advance well parameters content every request particular one posts server depend previously read another server operations like copy implemented summary examples applicative functors useful describing certain kinds effectful computations free applicative construct given functor specifying basic operations embedded language gives rise terms embedded dsl built applicative operators terms capable representing certain kind effectful computation described best help canonical form pure function applied effectful arguments calculation arguments may involve effects end arguments composed pure function means effects performed fixed specifying applicative expression case option parser example userp pure function given user constructor basic operation option defining option effects performed depend evaluator defined expression type freea option order effects depend implementation evaluator 
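Since the inductive definition and its instances above were garbled by extraction, here is a cleaned-up rendering of FreeA, assuming GADT syntax and writing the infix constructor as :$: (the original operator symbol did not survive); the adjunction helpers one, raise and lower, which the later sections define, are included here for completeness.

{-# LANGUAGE GADTs, RankNTypes #-}

-- the free applicative: a pure value, or an effectful function applied
-- to a (structurally smaller) free applicative argument
data FreeA f a where
  Pure  :: a -> FreeA f a
  (:$:) :: f (b -> a) -> FreeA f b -> FreeA f a
infixr 4 :$:

instance Functor f => Functor (FreeA f) where
  fmap g (Pure x)  = Pure (g x)
  fmap g (h :$: x) = fmap (g .) h :$: x

instance Functor f => Applicative (FreeA f) where
  pure = Pure
  Pure g    <*> y = fmap g y
  -- the recursive call is on the structurally smaller x, as in the
  -- paper's size argument
  (h :$: x) <*> y = fmap uncurry h :$: ((,) <$> x <*> y)

-- lift a single basic operation into the DSL
one :: Functor f => f a -> FreeA f a
one f = fmap const f :$: Pure ()

-- one half of the adjunction: interpret FreeA f into any applicative g,
-- given an interpretation of the basic operations
raise :: Applicative g => (forall x. f x -> g x) -> FreeA f a -> g a
raise _ (Pure x)  = pure x
raise k (h :$: x) = k h <*> raise k x

-- the other half: recover an interpretation of f from one of FreeA f
lower :: Functor f => (forall x. FreeA f x -> g x) -> f a -> g a
lower k = k . one

In particular, the recursive parserDefault above collapses to raise optDefault once the Option record is in scope, as the later adjunction section notes. With the definitions restored, the point about evaluator-dependent effect order can be made concrete: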
example one defines embedded language querying database constructs applicative expressions using freea one might analyze applicative expression collect information individual database queries defining functions similar count function web service example different possibly expensive duplicate queries merged performed instead executing effectful computations one one restricting expressivity language gain freedom defining evaluator works one might define parts expression embedded dsl using usual free monad construction parts using freea compose lifting free applicative expression free monad using following function functor freea free pure return free fmap fmap parts expression defined using free monad construction order effects fixed effects performed depend result previous effectful computations free applicative parts fixed structure effects depending monadic parts computation depend result static analysis carried applicative part test freea filesystem int free filesystem test let count result static analysis result applicative computation capriotti kaposi max read max max write blah possibility using results static analysis instead need specifying hand example would account counting certain function calls expression looking code make program less redundant parametricity order prove anything free applicative construction need make important observation definition constructor defined using existential type clear intuitively way given value form make use type hidden specifically function freea must defined polymorphically possible types could used existentially quantified variable definition make intuition precise assume form relational parametricity holds total subset haskell particular case constructor require freea freea natural transformation contravariant functors two contravariant functors could defined haskell using newtype newtype newtype freea freea instan functor contravariant contramap fmap instan functor contravariant contramap fmap action morphisms defined obvious way note make use fact freea functor naturality means given types function following holds freea fmap fmap unfolded definitions contramap removed newtypes note results actually imply naturality generality since type variable arbitrary functor instance concrete positive type expression together canonical instance however interpretation given sections freea defined way equation holds automatically free applicative functors isomorphism two definitions section show two definitions free applicatives given section isomorphic first functor freeal also functor instan functor functor freeal fmap purel purel fmap fmap functor laws verified simple structural induction constructor free theorem derived completely analogous way deriving equation equation states natural transformation freeal fmap fmap define functions convert two definitions functor freea freeal pure purel fmap flip functor freeal freea purel pure fmap flip also need fact natural transformation freeal fmap fmap proposition isomorphism inverse proof first prove freea compute using equational reasoning induction pure definition purel definition pure definition fmap flip definition fmap flip fmap flip capriotti kaposi equation fmap flip fmap flip inductive hypothesis fmap flip fmap flip equation fmap flip fmap flip functor fmap flip flip definition flip fmap functor next prove freeal compute using equational reasoning induction purel definition pure definition purel definition fmap flip definition fmap flip fmap flip inductive hypothesis fmap flip fmap flip equation fmap 
flip fmap flip freeal functor fmap flip flip definition flip fmap freeal functor next sections prove freea free applicative functor isomorphism two definitions results carry freeal free applicative functors applicative laws following laws applicative instance pure pure pure pure pure pure pure introduce abbreviations help make notation lighter uncurry pair lemma freea freea following equation holds fmap fmap proof compute fmap pure definition fmap fmap freea functor fmap definition pure definition fmap fmap pure fmap definition fmap fmap pair definition fmap fmap fmap pair functor fmap pair functor fmap fmap pair definition capriotti kaposi fmap definition fmap fmap lemma property holds freea freea freea freea pure proof suppose first pure pure pure definition pure definition fmap lemma fmap definition pure tackle case freea need define helper function compute pure definition pure fmap definition composition fmap definition fmap fmap pair functor definition fmap pair free applicative functors definition fmap fmap pair pair functor definition fmap pair pair definition fmap pair pair functor fmap fmap pair pair equation fmap fmap pair pair lemma times freea functor times fmap pure fmap induction hypothesis fmap fmap fmap definition lemma property holds freea freea pure pure proof form pure conclusion follows immediately let assume therefore lemma true structurally smaller values pure definition fmap pair pure definition pair fmap fmap pure induction hypothesis fmap fmap pure fmap freea functor fmap fmap equation fmap fmap functor fmap definition fmap freea fmap capriotti kaposi definition pure proposition freea applicative functor proof properties straightforward verify using fact freea functor properties follow lemmas respectively freea left adjoint going make statement freea free applicative functor precise first define category applicative functors show freea functor freea category endofunctors hask saying freea free applicative amounts saying freea left adjoint forgetful functor definition let two applicative functors applicative natural transformation polymorphic function satisfying following laws pure pure define type applicative natural transformations write haskell type appnat laws implied similarly pair functors define type nat type natural transformations note parametricity polymorphic functions automatically natural transformations categorical sense nat fmap fmap clear applicative functors together applicative natural transformations form category denote similarly functors natural transformations form category free applicative functors proposition freea defines functor proof already showed freea sends objects functors case applicative functors need define action freea morphisms natural transformations case liftt functor functor nat appnat freea freea liftt pure pure liftt liftt first verify liftt applicative natural transformation satisfies laws use equational reasoning proving law liftt pure definition pure liftt pure definition liftt pure definition pure pure law use induction size first argument explained section base cases liftt pure pure definition liftt fmap pure definition fmap liftt pure definition liftt pure definition fmap fmap pure definition pure pure definition liftt liftt pure liftt pure liftt pure definition liftt fmap definition fmap liftt fmap definition liftt fmap liftt natural capriotti kaposi fmap liftt definition fmap fmap liftt definition pure liftt definition liftt liftt pure liftt inductive case liftt definition liftt fmap uncurry fmap definition liftt fmap 
uncurry liftt fmap inductive hypothesis fmap uncurry liftt fmap liftt liftt natural fmap uncurry fmap liftt liftt natural fmap uncurry fmap liftt liftt definition liftt liftt definition liftt liftt liftt need verify liftt satisfies functor laws liftt liftt liftt liftt proof straightforward structural induction going need following natural transformation unit adjunction one functor nat freea one fmap const pure embeds functor freea used specialization function option section lemma one free applicative functors proof given easy verify uncurry const one definition one fmap const pure definition functor law fmap uncurry const fmap equation functor law fmap uncurry const equation proposition freea functor left adjoint forgetful functor graphically lower homf freea homa raise proof given functor applicative functor define natural bijection nat appnat freea raise functor applicative nat appnat freea raise pure pure raise raise lower functor applicative appnat freea nat lower one routine verification shows raise lower natural proof raise satisfies applicative natural transformation laws straightforward induction structure proof liftt satisfies laws proposition show inverses reason induction calculate one direction raise lower pure definition raise pure capriotti kaposi applicative natural transformation pure definition pure pure raise lower definition raise lower raise lower induction hypothesis lower definition lower one applicative natural transformation one lemma direction lower raise definition lower raise one definition one raise fmap const pure definition raise fmap const pure natural fmap const pure fmap pure applicative functor pure const pure natural pure pure const applicative law pure pure pure const applicative law applied twice pure applicative law example option parsers continued help adjunction defined raise lower able define useful functions case option parsers example used computing global default value parser free applicative functors parserdefault freea option maybe parserdefault raise optdefault extracting list options parser alloptions freea option string alloptions getconst raise opt const optname opt alloptions works first defining function takes option returns list name option lifting const applicative functor raise function thought way define semantics whole syntax dsl corresponding freea given one individual atomic actions expressed natural transformation functor applicative functor defining semantics using raise resulting function automatically applicative natural transformation circumstances however convenient define function pattern matching directly constructors freea like target obvious applicative functor structure makes desired function applicative natural transformation example write function runs applicative option parser list arguments accepting order matchopt string string freea option maybe freea option matchopt pure nothing matchopt opt value opt optname fmap optreader value otherwise fmap matchopt opt value matchopt function looks options parser match given argument successful returns modified parser option replaced pure value clearly matchopt opt value applicative since instance equation satisfied runparser freea option string maybe runparser opt value args ase matchopt opt value nothing nothing runparser args runparser parserdefault runparser nothing finally runparser calls matchopt successive pairs arguments arguments remain point uses default values remaining options construct result capriotti kaposi totality proofs paper apply total fragment haskell 
completely ignore presence bottom haskell subset use given semantics locally presentable cartesian closed category fact assume functors used throughout paper accessible inductive definitions regarded initial algebras accessible functors example realise freea assume regular cardinal define functor category endofunctors locally presentable proposition inductive definition freea regarded initial algebra given denotes internal hom exponential since locally presentable cocomplete coend exists lemma lemma provided large enough furthermore functor accessible proposition hence initial algebra equation trivial consequence definition function definitions use primitive recursion realised using universal property initial algebra directly one exception definition fmap uncurry contains recursive call first argument namely structurally smaller original one prove function nevertheless well defined introduce notion size values type freea size freea size pure size size conclude definition made sense target category need show size argument recursive call smaller size original argument immediate consequence following lemma lemma function freea size fmap size proof induction size fmap pure definition fmap free applicative functors size pure definition size definition size size pure size fmap definition fmap size fmap definition size size definition size size proofs using induction carry induction size first argument size defined size function semantics section establish results accessible functors locally presentable categories used section justify inductive definition freea begin technical lemma lemma suppose following diagram categories functors locally presentable accessible inclusion dense small full subcategory compact objects pointwise left kan extension along exists equal left kan extension along furthermore locally presentable accessible accessible proof let regular cardinal presentable pointwise left kan extension obtained colimit colim indices range comma category show exists therefore enough prove colimit realised small colimit colim capriotti kaposi express canonical colimit compact objects colim since preserves colimits get morphism colim colim gives cocone colimit straightforward verification shows universal second statement suppose also presentable possibly increasing assume exists small every object first part filtered colimit commutes compact commutes copowers left adjoints commutes coends colimits therefore accessible let categories finite products functors definition day convolution denoted pointwise left kan extension diagonal functor following diagram note day convolution two functors might exist certainly small cocomplete lemma suppose locally presentable accessible day convolution exists accessible proof immediate consequence lemma lemma suppose cartesian closed day convolution obtained coend free applicative functors proof coend calculus proposition let regular cardinal locally category functors locally proof let dense small full subcategory obvious functor func equivalence categories inverse given left kan extensions along inclusion func locally see example corollary proposition let regular cardinal locally day convolution two functors exists lemma day convolution operator functor proof enough show preserves filtered colimits pointwise two variables separately clear since filtered colimits commute finite products copowers coends recast equation terms day convolution follows equation makes precise intuition free applicative functors sense lists free monoids fact functor exactly one 
appearing usual recursive definition lists case construction happening monoidal category accessible endofunctors equipped day convolution also sketch following purely categorical construction free applicative lax monoidal functors essential rest paper quite easy consequence machinery developed section idea perform list construction one step instead iterating individual day convolutions using recursion namely category let free monoidal category generated objects resp morphisms lists objects resp morphisms clearly accessible finite products functor maps list corresponding product note accessible furthermore assigment extends cat preserves accessibility functors free applicative functor simply defined kan extension along functor accessible appropriate generalisation lemma hard see lax monoidal see example proposition omit proof free object obtained diagram chasing using universal property kan extensions related work idea free applicative functors entirely new number different definitions free applicative functor given haskell functor none includes proof applicative laws first author paper published specific instance applicative similar example shown section example later expanded haskell library command line option tom ellis proposes definition similar uses separate inductive type case corresponding constructor observes law probably holds existential quantification provide proof solve problem deriving necessary equation free theorem gives another similar version presents redundancies thus fails obey applicative laws example pure easily distinguished using function like count defined pattern matching constructors however remedied exposing limited interface includes equivalent raise function pure free constructors probably impossible observe violation laws using reduced interface also means definitions pattern matching like one matchopt section prohibited free package contains definition essentially identical freeal differing order arguments another approach differs significantly one presented paper underlies definition contained package uses encoding constraintkinds ghc extension generalise construction free applicative superclass functor idea use fact functor left adjoint monad codensity monad right kan extension along taking forgetful functor one obtain formula using expression right kan extension end one problem approach applicative laws make definition category left implicit universal quantification used represent end fact specializing code applicative constraint get data runfreea instan functor functor fmap fmap instan functor applicative pure pure law hold example need prove term pure equal strictly speaking false terms distinguished taking functor applicative instance satisfy law constant function returning counterexample intuitively however laws hold provided never make use invalid applicative instances make intuition precise one would probably need extend language quantification equations prove parametricity result extension another problem church encoding like solution presents limited interface thus harder use fact destructor runfreea essentially equivalent raise function used define applicative natural transformation function like matchopt applicative could defined direct way discussion work presented practical definition free applicative functor haskell functor proved properties showed applications examples paper show free applicative functors solve certain problems effectively applicability somewhat limited
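For comparison, the right-Kan-extension encoding discussed in the related work above can be rendered as follows (a sketch, not the cited package's exact source; the primed name is ours, to avoid clashing with the inductive FreeA).

{-# LANGUAGE RankNTypes #-}

-- Church-style / codensity-flavoured encoding: a free applicative value
-- is a way of interpreting the basic operations into any applicative g
newtype FreeA' f a = FreeA'
  { runFreeA' :: forall g. Applicative g => (forall x. f x -> g x) -> g a }

instance Functor (FreeA' f) where
  fmap g (FreeA' p) = FreeA' (\k -> fmap g (p k))

instance Applicative (FreeA' f) where
  pure x = FreeA' (\_ -> pure x)
  FreeA' p <*> FreeA' q = FreeA' (\k -> p k <*> q k)

As the text observes, the laws for this encoding hold only when every instantiation of g is itself a lawful Applicative, and pattern-matching definitions in the style of matchOpt are unavailable against this interface. The inductive presentation has its own limits, though: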
example applicative parsers usually need alternative instance well free applicative construction provide one possible direction future work trying address issue modifying construction yield free alternative functor instead unfortunately satisfactory set laws alternative functors simply define alternative functor monoid object many commonly used instances become invalid like one maybe using rig categories lax functors formalise alternative functors seems workable strategy currently exploring another direction formalizing proofs paper proof assistant embedding total subset haskell consideration type theory dependent types attempts replicate proofs agda failed far subtle issues interplay parametricity encoding existentials dependent sums particular equation inconsistent representation existential type definition freea example terms like const pure pure equal equation obviously distinguished using large elimination surprising repeatedly made use size restrictions sections definitely need somehow replicated predicative type theory like one implemented agda reasonable compromise develop construction containers one prove free applicative functor given using notation end section regarded discrete category another possible development results paper trying generalise construction free applicative functor functors monoidal category section focused categories finite products clear monoidal categories natural setting evidenced appearance corresponding cat furthermore applicative functor defined lax monoidal functor strength completely ignore strengths paper could remedied working general setting monoidal category acknowledgements would like thank jennifer hackett thorsten altenkirch venanzio capretta graham hutton edsko vries christian sattler helpful suggestions insightful discussions topics presented paper
references
Michael Abbott, Thorsten Altenkirch, and Neil Ghani. Containers: constructing strictly positive types. Theoretical Computer Science (Applied Semantics: Selected Topics).
Jiří Adámek and Jiří Rosický. Locally Presentable and Accessible Categories. Cambridge University Press.
Brian Day. Construction of Biclosed Categories. PhD thesis, University of New South Wales.
G. M. Kelly. A unified treatment of transfinite constructions for free algebras, free monoids, colimits, associated sheaves, and so on. Bulletin of the Australian Mathematical Society.
Simon Marlow (editor). Haskell Language Report.
Conor McBride and Ross Paterson. Applicative programming with effects. Journal of Functional Programming.
Ross Paterson. Constructing applicative functors. Mathematics of Program Construction, Lecture Notes in Computer Science.
John Reynolds. Types, abstraction and parametric polymorphism. IFIP Congress.
S. Doaitse Swierstra and Luc Duponcheel. Deterministic, error-correcting combinator parsers. Advanced Functional Programming, Lecture Notes in Computer Science.
Wouter Swierstra. Data types à la carte. Journal of Functional Programming.
Philip Wadler. Theorems for free! Functional Programming Languages and Computer Architecture, ACM Press.
6
dec conditions stability convergence stochastic approximations applications approximate value fixed point iterations arunselvan ramaswamy dept electrical engineering information technology paderborn university paderborn germany shalabh bhatnagar dept computer science automation indian institute science bengaluru india december abstract main aim paper development easily verifiable sufficient conditions stability almost sure boundedness convergence stochastic approximation algorithms saas meanfields class algorithms become important recent times paper provide complete analysis algorithms three different yet related sets sufficient conditions based existence associated lyapunov function unlike previous lyapunov function based approaches provide simple recipe explicitly constructing lyapunov function needed analysis work builds works abounadi bertsekas borkar munos ramaswamy bhatnagar important motivation flavor assumptions comes need understand dynamic programming reinforcement learning algorithms use deep neural networks dnns function approximations parameterizations algorithms popularly known deep learning algorithms important application theory provide complete analysis stochastic approximation counterpart approximate value iteration avi important dynamic programming method email email shalabh designed tackle bellman curse dimensionality assumptions involved significantly weaker easily verifiable truly theory presented paper also used develop analyze first saa finding fixed points contractive maps introduction stochastic approximation algorithms saas important class iterative schemes used solve problems arising stochastic optimization stochastic control machine learning financial mathematics among others saas constitute powerful tool due model free approach solving problems first stochastic approximation algorithm developed robbins monro solve root finding problem important contributions modern stochastic approximations theory made hirsch borkar borkar meyn hofbauer sorin name important aspect analysis saas lies verifying almost sure boundedness stability iterates hard many applications paper present easily verifiable sufficient conditions stability convergence saas specifically consider following iterative scheme sequence subsets marchaud map noise sequence present three different yet overlapping sets easily verifiable conditions stability almost sure boundedness convergence closed connected internally chain transitive invariant set reader referred section analysis problem stability saas previously studied ramaswamy bhatnagar developed first set sufficient conditions stability convergence extending ideas borkar meyn sufficient conditions based limiting properties associated scaled differential inclusion contrary conditions presented paper based local properties associated differential inclusion believe stability criterion presented applicable scenarios sense orthogonal readily use stability criterion work contributes literature saas presenting first set lyapunov function based sufficient conditions stability convergence also sets sufficient conditions stability convergence ones presented paper present lyapunov function based stability conditions analyses saas important motivation lies proliferation dynamic programming reinforcement learning methods based value iteration policy gradients use deep neural networks dnns function approximations parameterizations respectively use dnns often causes algorithms finite time find solutions paper present sufficient conditions guarantee occurs 
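The displayed recursion analyzed in this paper did not survive extraction; in LaTeX, the canonical scheme with a set-valued mean field F and the standard Robbins-Monro step-size conditions reads as follows (the labels here are generic, not necessarily the paper's exact assumption numbering):

\[
  x_{n+1} = x_n + a(n)\,\bigl[\,y_n + M_{n+1}\,\bigr], \qquad y_n \in F(x_n),
\]
\[
  \sum_{n \ge 0} a(n) = \infty, \qquad \sum_{n \ge 0} a(n)^{2} < \infty,
\]

where F is a Marchaud map (convex compact values, pointwise boundedness \(\sup_{z \in F(x)} \|z\| \le K(1 + \|x\|)\), upper semicontinuity) and \(\{M_n\}\) is the additive noise sequence; the iterates are expected to track solutions of the associated differential inclusion \(\dot{x}(t) \in F(x(t))\).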
conditions guarantee suboptimal solutions found close optimal solution reader referred sections details work builds work abounadi bertsekas borkar ramaswamy bhatnagar stability criterion dependent part possibility comparing various instances algorithm analyzed exact nature comparison reader referred assumption section stated earlier present three sets assumptions qualitatively different yet overlapping stability convergence consequence framework developed herein semantically rich enough cover multitude scenarios encountered reinforcement learning stochastic optimization applications answer following important question mere existence lyapunov function imply almost sure boundedness stability show existence lyapunov function allows construct inward directing set see proposition details use inward directing set develop partner projective scheme scheme shown converge point inside previously constructed inward directing set order show stability compare aforementioned partner projective scheme exact nature comparison outlined imperative partner projective scheme comparable words seems mere existence lyapunov function insufficient ensure stability additional assumptions section needed purpose demonstrate verifiability assumptions using framework comprehensively analyze two important problems approximate value iteration methods possibly biased approximation errors saas finding fixed points contractive maps worth noting analysis approximate value iteration methods distinguish biased unbiased approximation errors section application main results present complete analysis approximate value iteration methods important class dynamic programming algorithms significantly weaker set assumptions value iteration important dynamic programming method used numerically compute optimal value function markov decision process mdp however well known many important applications suffers bellman curse dimensionality approximate value iteration avi methods endeavor address bellman curse dimensionality introducing approximation operator allows approximations every step classical value iteration method approximation errors allowed unbounded algorithm may converge see details avis bounded approximation errors previously studied sekas tsitsiklis studied scenarios wherein approximation errors uniformly bounded states munos extended analysis allowing approximation errors bounded weighted sense infinite horizon discounted cost problem however convergence analysis requires transition probabilities future state distributions smooth detailed comparison results concerning avi methods already present literature see section important contribution paper providing convergence analysis avis without aforementioned restriction transition probabilities future distributions section analysis encompasses stochastic shortest path discounted cost infinite horizon problems analyzing stochastic iterative avis see section details stocastic iterative avis stability almost sure boundedness iterates normally assumed hold stated stability hard assumption verify unclear introduction approximation operator leads unstable iterates thus important contribution paper showing stability stochastic iterative avis weak verifiable conditions section shown stochastic iterative avi converges possibly suboptimal vector belongs small neighborhood optimal vector shown size neighborhood directly proportional magnitude approximation errors see theorems details thus section provide complete analysis stability convergence general avi methods weak easily verifiable set 
sufficient conditions eliminate previous restrictions smoothness transition probabilities future distributions also allow general operational noise compared previous literature important aspect analysis encompasses stochastic shortest path infinite horizon discounted cost problems provide unified analysis stability convergence avi methods wherein approximation errors bounded respect multiple norms finally believe theory developed herein useful providing theoretical foundation understanding reinforcement learning dynamic programming algorithms use dnns area garnered significant interest recently section another important application framework develop analyze first time general saas finding fixed points maps fixed point theory active area research due applications multitude disciplines contribution front analyzing stochastic approximation algorithms finding fixed points contractive setvalued maps see section details mentioned show algorithms bounded almost surely converge sample path dependent fixed point map consideration best knowledge first saa complete analysis finding fixed points maps organization paper section list definitions notations used paper section present three sets sufficient conditions stability convergence closed connected internally chain transitive invariant set associated saas sections present main results theorems section analyze stochastic iterative avi methods see theorems section develop analyze saa finding fixed points contractive maps see theorem main result detailed discussion assumption crucial analysis provided section finally section provides concluding remarks definitions notations definitions notations encountered paper listed section map say given sequences marchaud map map subsets called marchaud satisfies following properties convex compact boundedness sup kwk kxk iii semicontinuous let marchaud map differential inclusion given guaranteed least one solution absolutely continuous reader referred details say absolutely continuous map satisfies semiflow associated pis defined let define limit set solution limit set solution given let defined similarly limit set solution given invariant set invariant every exists trajectory entirely note definition invariant set used paper positive invariant set open closed neighborhoods set let inf define neighborhood neighborhood defined open ball radius around origin represented closed ball represented internally chain transitive set said internally chain transitive compact every following exists solutions differential inclusion points real numbers greater sequence called chain attracting set fundamental neighborhood attracting compact exists neighborhood called fundamental neighborhood addition compact attracting set also invariant called attractor basin attraction given lyapunov stable set lyapunov stable assumptions consider following iteration subsets given sequence given noise sequence make following assumptions subsets marchaud map sup kzk kxk given marchaud constant sequence constant lim min note assumption presents set lyapunov conditions associated two sets alternative conditions also presented subsequently section associated differential inclusion compact set bounded open neighborhood function strongly positively invariant iii continuous function let two sequences generated common probability space noise sequence sup kxn note follows proposition hofbauer sorin contains lyapunov stable attracting set exists attractor contained whose basin attraction contains fundamental neighborhoods small values note assumptions 
noise weakened include general noise sequences later see section also similar assumption used abounadi case regular saas maps reader referred section detailed discussions assumption define open sets following conditions satisfied fundamental neighborhood attractor contained since continuous follows open relative assumption implies small values may noted closure let vrb vrc chosen small enough ensure vrc conditions hold note condition automatically satisfied since present alternative assumption associated compact set bounded open neighborhood function strongly positively invariant iii upper semicontinuous function bounded words sup difference statements iii iii contributes qualitative difference assumptions follows proposition hofbauer sorin contains attractor set whose basin attraction contains case define open sets satisfying stated first prove sets form open relative expected proposition sup set open relative proof proof suppose open relative exists every follows boundedness exists since follows upper semicontinuity since inf get contradiction prove second part proposition enough show every since open relative left show since exists follows upper semicontinuity boundedness lim sup case small values ready define define possible small values choose open possible since compact open remark let suppose given differential inclusion associated attractor set strongly positive invariant neighborhood define lyapunov function found remark section words given max increasing function claim satisfies verify claim consider following strongly positive invariant sup hence sup follows upper semicontinuity sup satisfied left show iii also satisfied fix follows definition max max max rhs equation consider one final alternative global attractor upper semicontinuous function iii define open sets satisfying conditions see statement recall define appropriate satisfying sup sup suppose unable find may choose open set satisfying required properties fixed mentioned remark remark explicitly constructed local lyapunov function satisfying similarly construct global lyapunov function satisfying define function max defined remark lyapunov function satisfies proof similar one found remark inward directing sets given differential inclusion open set said inward directing set respect aforementioned differential inclusion whenever clearly inward directing sets invariant also follows solution starting boundary directed inwards proposition open sets constructed accordance assumptions respectively inward directing sets respect proof proof recall set constructed every every since follows iii words left show follows directly observation every similarly shown inward directing follows use assumptions existence inward directing set respect associated prove stability consequence proposition may verify one among ensure existence inward directing set may noted assumptions qualitatively different however primary role help find one aforementioned inward directing sets depending nature iteration analyzed may easier verify one others analysis projective scheme begin section minor note notations since roles indistinguishable shall refer generically similar manner generically referred respectively note also define projection map subsets follows otherwise order prove stability compared following projective scheme note initial point first projected starting projective scheme equation rewritten let construct linearly interpolated trajectory using begin dividing diminishing intervals using sequence let linearly interpolated trajectory defined 
follows constructed trajectory right continuous limits lim lim exist jumps occur exactly corresponding also define three constant trajectories follows trajectories also right continuous limits define linearly interpolated trajectory associated follows define trajectories using constructed trajectories xln xcn ycn gnc wln lemma xln xln proof ycn wln gnc fix following proof xln let express form equation note following xln unfolding equation till xln yields xln xln make following observations gnc wln wln wln consequence observations becomes xln xln ycn wln gnc fix xln gnc viewed subsets equipped skorohod topology may use theorem show relatively compact see billingsley details theorem states following set relatively compact following conditions satisfied sup sup lim sup sup lim sup sup lim sup sup min xln gnc bounded two discontinuities separated least fixed four conditions satisfied see details lemma xln gnc relatively compact equipped skorohod topology proof proof recall since markyk sup chaud follows sup sup constant independent words sequences xln gnc bounded remains show two discontinuities separated let min sup sup kyk clearly define max jump follows definition words discontinuities interval fix two discontinuities separated least since arbitrary follows xln gnc relatively compact since wln pointwise bounded assumption continuous relatively compact follows limit wln constant function suppose consider xln xln ycn gnc along aforementioned noise trajectories converge limits identical consider along converge let suppose limit constant function shown limit proof along lines proof theorem chapter borkar suppose every limit constant function whenever every limit solution suppose show aforementioned statement true every along stability implied note set infinite cardinality since two discontinuities least apart lemma let without loss generality let xln gnc convergent xln proof proof begin making following observations two discontinuities least apart solutions starting points hit boundary later remain interior observation consequence proposition follows observations small values inf follows nature convergence kxln kxln large values xln exactly one jump point discontinuity let call point discontinuity let large values kxln xln also xln xln since follows xln hence similarly observe solution since since inward directing since since consequence choice within context hence may fix fix follows proposition let without loss generality else may choose along convergent thus since solution starting point aforementioned conclusion contradicts iii iii words thus shown jump suppose holds follows proposition hofbauer sorin attracting set within basin attraction suppose holds globally attracting set lemma projective stochastic approximation scheme given converges attractor proof proof begin noting lemma arbitrary since certain number iterations projections could sample path dependent follows lemma projective scheme given tracks solution words projective scheme given converges limit point iterates given within sometime track solution since within basin attraction iterates converge main results stability convergence section show iterates given stable bounded almost surely converge closed connected internally chain transitive invariant set associated show stability comparing iterates generated generated stated assume noise sequence exactly theorem iterates given stable bounded almost surely converge closed connected internally chain transitive invariant set associated proof proof notational convenience use iterates 
generated projective scheme iterates generated recall see lemma words exists possibly sample path dependent words sup supkxk follows sup kxn surely exists possibly sample path dependent directly leads stability iterates given satisfy assumptions shown also stable follows theorem lemma iterates converge closed connected internally chain transitive invariant set associated general noise sequences restriction noise sequence rather strict since allows bounded noise section show theorem continue hold even noise sequence generally square integrable martingale difference sequence analysis regular saas shown assumption noise sequence square integrable martingale difference sequence kxn without loss generality may assume equal otherwise use maximum two constants analysis projective scheme given assumption used lemma specifically used show two discontinuities xnl gnc separated least show aforementioned property holds replaced first prove auxiliary result lemma let lim fix arbitrary consider following within context projective scheme given follows proof proof need show chebyshev inequality since martingale difference sequence get within context projective scheme given almost surely xin sup kxn follows ekxn hence equation becomes since finally get let consider scenario find separation two points discontinuity words exists lim without loss generality assume jumps note sup kyn equation becomes kxl kxl since large directly contradicts lemma hence always find separating two points discontinuity lemma used ensure convergence attractor theorem used ensure convergence closed connected internally chain transitive invariant set associated specifically ensures convergences let define converges trivially follows martingale noise sequence satisfies show convergence enough show corresponding quadratic variation converges almost surely process words need show consider following convergence quadratic variation process context lemma follows fact ekxn sup words sup similarly convergence theorem follows stability iterates supkxn sup kxn words lemma theorem assumption satisfied following generalized version theorem direct consequence observations made theorem iterates given stable bounded almost surely converge closed connected internally chain transitive invariant set associated application approximate value iteration methods section present analysis recursion bellman operator approximation operator specifically consider following stochastic approximation counterpart scheme bellman operator sequence satisfying iii approximation error stage martingale difference noise sequence let call stochastic iterative avi worth noting distinguish stochastic shortest path infinite horizon discounted cost problems definition bellman operator changes appropriately make following assumptions bellman operator contractive respect weighted unique fixed point unique globally asymptotically stable equilibrium point fixed recall definition weighted given max later section analyze approximation errors bounded general weighted sense weighted euclidean norms readily satisfied many applications see section bertsekas tsitsiklis details first let consider couple technical lemmas lemma convex compact subset proof proof first show convex given need show show compact define max since follows bounded set left show closed let every since lim inf kyn follows lemma map given marchaud map proof proof since compact convex set follows compact convex since contraction map let min max observe standard euclidean norm also observe kzk hence kxk sup kzk sup kzk 
follows lemma sup kzk finite hence sup kzk kxk show upper semicontinuous let since continuous hence lim inf kyn words let define hausdorff metric respect weighted follows given max max min min given exist kzk consider set equilibrium points small values follows set inequalities belongs small neighborhood similarly small values minor perturbation recall globally asymptotic stable equilibrium point follows upper semicontinuity attractors small values exists within small neighborhood global attractor perturbed system show converges may construct global lyapunov function attractor illustrated remark words satisfies hence find sets let consider following projective approximate value iteration worth noting noise sequences identical following analysis section conclude ready analyze theorem stable converges point approximation errors proof proof start showing satisfies assumption earlier showed implies exists possibly sample path dependent grouping terms interest inequality get consequence equation becomes consider following two cases case case becomes simplifying equation get case case becomes simplifying equation get may thus conclude following applying set arguments proceeding recursively kjn may conclude kjn kjn words sup kjn follows sup kjn euclidean norm arguments inspired abounadi bertsekas borkar jaakkola jordan singh shown satisfies follows theorem iterates given track solution closed connected internally since global attractor chain transitive invariant set follows show set equilibrium points implying proceeding consider following theorem aubin cellina theorem chapter let upper semicontinuous map closed subset compact convex values solution trajectory converges equilibrium already shown supkjn words exists large compact convex set possibly sample path dependent chosen tracking solution also inside asymptotically follows conditions stated theorem satisfied hence every limit point equilibrium point words remark bound approximation errors decreases size limiting set corresponding approximate value iteration given also decreases specifically approximation errors bounded weighted sense approximation errors encountered hitherto section bounded weighted sense stated earlier errors often consequence approximation operators used counter bellman curse dimensionality setting typically one given data form unbiased estimates objective function note role approximation operator may played supervised learning algorithm algorithm would return good fit within class functions objective algorithms would minimize approximation errors previously considered approximation operators minimize errors weighted sense relevant largescale applications may possible approximate states uniformly many applications approximation operators work minimizing errors norms see munos details section consider general case approximation errors bounded weighted sense specifically analyze fixed recall definition weighted given recall bellman operator contractive respect first establish relationship weighted weighted fix defined lemma hence max similarly following inequality section consider following stochastic iterative avi scheme define previous subsection show marchaud map may state identical theorem theorem stable converges point approximation errors proof proof proof theorem need show satisfies follows setting theorem follows satisfies regards convergence using similar arguments show converges set equilibrium points given previous subsection comparison previous literature important contribution understanding convergence 
approximate value iteration avi methods due munos paper analyzed avi methods infinite horizon discounted case problem wherein approximation errors bounded weighted sense significant improvement considered max norms however basic procedure considered numerical avi scheme complete knowledge system model transition probabilities assumed addition convergence shown one following two assumptions transition probabilities let distribution state space states policy distribution policies dissuch smoothness constand counted future state distribution given discount factor smoothness requirements transition probabilities strong hold large class systems worth noting smoothness requirements eliminated analysis consider stochastic approximation counterpart avi involves operational noise component addition model general martingale difference noise sequence algorithm arises instance procedure bellman operator fact operator bellman operator corresponding given stationary policy see chapter model information assumed known system transition probabilities setting case thus convergence analysis works case avi schemes information transition probabilities known restrictions imposed transition probabilities measurement error albeit bounded may arise instance use function approximation require unique globally asymptotically stable equilibrium bellman operator contraction map since unique solution equation natural expect aforementioned requirement holds true also important note analysis works stochastic shortest path infinite horizon discounted cost problems value iteration important reinforcement learning algorithm stated section case problems avi methods used obtain suboptimal solutions arising say use function approximation techniques showing stochastic approximation counterpart bounded almost surely hard many reinforcement learning applications thus one significant contributions paper addressed previous literature lies development easily verifiable sufficient conditions almost sure boundedness avi methods involving dynamics application finding fixed points maps section showed stochastic iterative avi given converges vector belongs small neighborhood optimal vector started observing fixed points perturbed bellman operator belong small neighborhood consequence upper semicontinuity attractor sets showed converges fixed point perturbed bellman operator thereby showing converges small neighborhood section generalize ideas section develop analyze saa finding fixed points contractive maps suppose given map subsets present sufficient conditions following stochastic approximation algorithm bounded converges fixed point given sequence satisfying iii martingale difference noise sequence satisfying definitions given metric space define hausdorff metric respect follows min min min call map contractive say bounded diameter diam define diam sup impose following restrictions marchaud map bounded diameter contractive respect metric metric let denote set fixed points exists compact subset along strongly positive invariant bounded open neighborhood unique global attractor since assumed contractive respect follows theorem nadler least one fixed point assumption readily satisfied popular metric norms weighted weighted among others assumption imposed ensure satisfies specifically imposed ensure existence inward directing set associated see proposition details words find bounded open sets inward directing section compare projective counterpart given identical projection operator defined beginning section analysis projective scheme 
proceeds identical manner section specifically may show every limit point projective scheme belongs following theorem immediate theorem iterates given bounded almost surely limit point fixed point map proof proof proof theorem proceeds similar manner theorem provide outline avoid repetition begin showing bounded almost surely stable comparing since limit points belong exists possibly sample path dependent following inequality diam every recall contraction parameter map consider two possible cases case case shown case case shown conclude following follows inequality satisfies assumption hence get bounded almost surely stable since iterates stable follows theorem chapter every limit point equilibrium point map words limit point hence shown every limit point fixed point map remark assumed bounded diameter see primary task assumption showing almost sure boundedness specifically used show satisfied depending problem hand one may wish away bounded diameter assumption example may sup diam bounded diameter assumption dispensed since marchaud bounded sup kzk kxk words diam kxk theory pointwise boundedness allows unbounded diameters diam kxk bounded diameter assumption prevents scenario happening applications use approximate operators often reasonable assume errors due approximations bounded associated map naturally bounded diameter reader referred section example setting note assumption section investigate following question conditions stability saa guaranteed provided known priori corresponding projective scheme convergent lemma abounadi question answered simple iterative schemes make investigation setting saas prove following lemma lemma let open bounded subsets consider algorithm make following assumptions random sequence constitutes noise bounded diameter contractive first second fixed respect metric words sup exists sequence generated converges vector bounded almost surely proof proof since exists following details inequality identical inequality given unfolding right hand side stage get following words sup hence bounded almost surely conclusions briefly provide summary results hitherto presented paper developed three sets sufficient conditions stability almost sure boundedness convergence stochastic approximation algorithms general conditions noise lyapunov function based assumptions presented general easily verifiable truly previous lyapunov function based approaches verifiability assumptions suffers need explicitly construct said lyapunov function however provided recipe explicitly construct required lyapunov function see remarks details moreover lyapunov function based stability conditions first literature stochastic approximations maps framework lends naturally analysis stochastic iterative counterpart avi methods algorithms become important recent years due proliferation dnns function approximations parameterizations consequence framework avi methods analyzed significantly relaxed set assumptions previous literature showed particular stochastic iterative avi bounded almost surely converges vector belonging small neighborhood optimal interesting consequence analysis fact limiting set avi methods fixed points perturbed bellman operator worth noting framework used analyze approximate policy gradient methods well pertain case unlike assumes complete knowledge policy gradients knowledge approximate policy gradients gradients errors finally demonstrated generality theory developing analyzing first saa finding fixed points contractive maps best knowledge prior work direction first algorithm 
finding fixed points maps references abounadi bertsekas borkar stochastic approximation nonexpansive maps application algorithms siam journal control optimization aubin cellina differential inclusions maps viability theory springer dynamical system approach stochastic approximations siam control hirsch asymptotic pseudotrajectories chain recurrent flows applications dynam differential equations hofbauer sorin stochastic approximations differential inclusions siam journal control optimization pages bertsekas tsitsiklis programming athena scientific edition billingsley convergence probability measures john wiley sons borkar stochastic approximation two time scales syst control borkar stochastic approximation dynamical systems viewpoint cambridge university press borkar meyn method convergence stochastic approximation reinforcement learning siam control optim farias van roy existence fixed points approximate value iteration learning journal optimization theory applications jaakkola jordan singh convergence stochastic iterative dynamic programming algorithms advances neural information processing systems pages munos error bounds approximate value iteration proceedings national conference artificial intelligence volume page nadler contraction mappings pacific journal mathematics ramaswamy bhatnagar generalization theorem stochastic recursive inclusions mathematics operations research herbert robbins sutton monro stochastic approximation method annals mathematical statistics pages sutton mcallester singh mansour policy gradient methods reinforcement learning function approximation advances neural information processing systems pages
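To ground the preceding discussion, the following is a minimal Python sketch of the stochastic-approximation counterpart of value iteration discussed above, x_{n+1} = x_n + a(n)[T(x_n) - x_n + M_{n+1}], where T is the Bellman optimality operator of a small synthetic MDP (a discounted contraction) and M_{n+1} is bounded zero-mean noise. The MDP data, the step sizes a(n) = 1/(n+1), and the noise level are illustrative assumptions, not taken from the paper; the sketch only demonstrates the noisy iterates tracking the fixed point guaranteed by the contraction property.

import numpy as np

# Stochastic-approximation counterpart of value iteration:
#   x_{n+1} = x_n + a_n * (T(x_n) - x_n + noise),
# where T is the Bellman optimality operator of a small synthetic MDP
# (a gamma-contraction in the max norm). All problem data below are
# illustrative, not taken from the paper.

rng = np.random.default_rng(0)
nS, nA, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] is a distribution over next states
R = rng.uniform(0.0, 1.0, size=(nS, nA))        # immediate rewards

def bellman(v):
    # T(v)(s) = max_a [ R(s,a) + gamma * sum_{s'} P(s'|s,a) v(s') ]
    return np.max(R + gamma * P @ v, axis=1)

# Reference fixed point via plain (noise-free) value iteration.
v_star = np.zeros(nS)
for _ in range(2000):
    v_star = bellman(v_star)

# Noisy iterates with tapering steps a_n = 1/(n+1) and bounded,
# zero-mean (martingale-difference-like) noise; they should settle
# in a small neighborhood of v_star.
x = np.zeros(nS)
for n in range(20000):
    noise = rng.uniform(-0.1, 0.1, size=nS)
    x = x + (1.0 / (n + 1)) * (bellman(x) - x + noise)

print("max |x - v_star| =", np.max(np.abs(x - v_star)))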
3
sensor selection target tracking wireless sensor networks uncertainty nianxia cao student member ieee sora choi oct engin masazade member ieee pramod varshney fellow ieee abstract paper propose multiobjective optimization framework sensor selection problem uncertain wireless sensor networks wsns uncertainties wsns result set sensor observations insufficient information target propose novel mutual information upper bound miub based sensor selection scheme low computational complexity fisher information based sensor selection scheme gives estimation performance similar mutual information based sensor selection scheme without knowing number sensors selected priori multiobjective optimization problem mop gives set sensor selection strategies reveal different two conflicting objectives minimization number selected sensors minimization gap performance metric miub sensors transmit measurements selected sensors transmit measurements based sensor selection strategy illustrative numerical results provide valuable insights presented index terms target tracking sensor selection fisher information mutual information information fusion multiobjective optimization wireless sensor networks cao choi varshney department electrical engineering computer science syracuse university syracuse usa email ncao varshney masazade department electrical electronics engineering yeditepe university istanbul turkey email work cao choi varshney supported air force office scientific research afosr grant work masazade supported scientific technological research council turkey tubitak grant preliminary version paper appears ieee international conference information fusion october draft ntroduction wireless sensor network wsn composed large number densely deployed sensors sensors devices limited signal processing capabilities programmed networked properly wsns useful many application areas including battlefield surveillance environment monitoring target tracking industrial processes health monitoring control work presented paper task wsn track target emitting reflecting energy given region interest roi sensors send observations regarding target central node called fusion center responsible final inference target tracking problems often require coverage broad areas large number sensors densely deployed roi results new challenges resources bandwidth energy limited situations inefficient utilize sensors roi including uninformative ones hardly contribute tracking task hand still consume resources issue investigated addressed via development sensor selection schemes whose goal select best set sensors tracking task satisfying performance resource constraints sensor selection problem target localization target tracking considered among others sensor sets selected get desired information gain reduction estimation error target state mutual information entropy considered performance metric sensors lowest posterior lower bound pcrlb inverse fisher information selected authors compared two sensor selection criteria namely pcrlb sensor selection problem based quantized data showed pcrlb based sensor selection scheme achieves similar mean square error mse significantly less computational effort sensor selection problem formulated integer programming problem relaxed solved convex optimization sensor selection strategy reformulating kalman filter proposed able address different performance metrics constraints available resources authors aimed find optimal sparse collaboration topologies subject certain information energy constraint context 
distributed estimation complete october draft literature review sensor management target tracking see references therein previous research sensor selection assumes wsns operate reliably target tracking process without interruptions fact situations sensor observations quite uncertain example sensors may temporary failure may abrupt changes operating environment interference traffic may change power received sensors moreover random interruptions may appear communication channels system adversaries may jam wireless communications using different attack strategies types uncertainties would result set sensor observations insufficient information target fusion center words uncertain wsn sensor observations may contain useful information regarding target certain probability important investigate sensor selection problem uncertain environment work study uncertainty caused occlusions sensors may able observe target blocked obstacles regarding representation type uncertainty authors introduced stochastic model sensor measurements furthermore work generalized model multiple sensors considering realistic viewpoint sensors different uncertainty different time instants problems involving uncertain wsns even though studies kalman filter target tracking target localization problem channels sensor selection problem wsns uncertain sensor observations considered literature subject paper aforementioned literature sensor selection schemes require priori information number sensors selected time denoted computationally efficient algorithms developed order find optimal sensors achieve maximum performance gain realistically many applications like target tracking unlikely number sensors need selected time step tracking known system designer operation begins therefore quite necessary important investigate sensor selection strategies determine optimal number sensors selected well sensors select based wsn conditions sensor network design usually involves consideration multiple conflicting objectives maximization lifetime network inference performance minimizing october draft cost resources energy communication deployment costs problems investigate among conflicting objective functions called multiobjective optimization problems mops preliminary work sensor selection method utilizing performance metric mop framework presented assumption sensors wsn reliable work optimized two objectives simultaneously minimization total number sensors selected time minimization information gap sensors transmit measurements selected sensors transmit measurements work investigate sensor selection problem uncertain wsn generalize approach presented addressing issues arise due uncertainty see paper based selection scheme fiss tends select sensors relatively close target based selection scheme miss selects sensors high sensing probabilities achieves better performance better performance miss comes along high computational complexity thus propose use mutual information upper bound miub performance metric sensor selection problem complexity computing miub similar evaluating much lower computing also show simulation experiments miub based selection scheme miubss hardly degrades tracking performance furthermore consider sensor selection problem uncertainty mop framework nondominating sorting genetic applied dynamically select optimal set sensors time step numerical results show miubss selects sensors fiss mop framework also compare framework sensor selection methods weighted sum method convex optimization method show compromise solution 
discussed later paper adaptively decides optimal number sensors time step tracking achieves satisfactory estimation performance obtaining savings terms number sensors rest paper organized follows section introduce uncertain wsn system model target tracking framework using particle filter given section iii section performance metric sensor selection introduced comparisons performed numerical experiments section review fundamentals mop apply solve proposed mop also investigate october draft performance mop framework simulations section section devoted conclusions future research directions ystem model consider target tracking problem moving target emitting reflecting signal area interest tracked wsn consisting sensors target state assumed vector target positions target velocities horizontal vertical directions even though approaches developed paper applicable complex dynamic models assume linear dynamic model fxt state transition matrix gaussian process noise zero mean covariance matrix sampling interval process noise parameter assumed signal emitted target follows power attenuation model thus signal power received sensor located emitted signal power target distance zero signal decay exponent scaling parameter distance target ith sensor time step uncertainty model sensor observations discussed earlier sensor observations may uncertain due sensor failures natural interference random interruptions regarding different uncertainties different probabilistic models paper consider scenario sensor observation october draft uncertainty caused obstacles assume following probabilistic measurement model proposed generalized sensor observation assumed contain noise sensor sense target due obstacles since uncertainty may happen time sensor sensing probability may identical across sensors wsn probability probability sensing probability sensor represents signal amplitude received sensor time step measurement noise assumed independent across time steps across sensors follows gaussian distribution parameters likelihood function sensor measurements given target state simply product sensor likelihood function given follows gaussian distribution probability follows gaussian distribution probability communication fusion center sensors consider following two practical scenarios sensors directly send analog measurements fusion center sensors quantize analog measurements bits transmit quantized data fusion center tracking analog sensor measurements contain complete information observation expense high communication cost hand quantized measurements save communication burden lose information target quantized measurement sensor time step defined set quantization thresholds october draft algorithm sir particle filter target tracking set generate initial particles xst propagating particles obtain sensor data wts updating weights obtained data wts pnst wts normalizing weights wts xst xst resampling xst wts end number quantization levels probability takes value denotes complementary distribution standard gaussian distribution zero mean unit variance exp since sensor measurements conditionally independent likelihood function written product sensor likelihood function iii particle iltering target racking target tracking problem requires estimation target state using sequence sensor measurements nonlinear systems extended kalman filter ekf provides suboptimal october draft solutions however sensor measurements quantized even linear gaussian systems ekf fails provide acceptable performance especially number quantization levels 
small thus employ sequential importance resampling sir particle filter solve nonlinear target tracking problem analog quantized sensor measurements sir algorithm based monte carlo method used recursive bayesian filtering problems weak assumptions main idea particle filter find discrete representation posterior distribution using set particles xst associated weights wts wts xst dirac delta measure denotes total number particles number particles large enough weighted sum particles based monte carlo characterization equivalent representation posterior distribution resampling step sir particle filter avoids situation one importance weights close zero iterations known degeneracy phenomenon particle filter algorithm provides summary sir particle filtering algorithm target tracking problem analog data denotes number time steps target tracked replaced quantized data utilized transmission ensor election riteria ncertain wsn section present investigate three performance metrics miub sensor selection problem uncertain wsn formulating three performance metrics mathematically analog data quantized data respectively compare respect resulting tracking performance fisher information posterior lower bound pcrlb provides theoretical performance limit bayesian estimator let denote joint probability density function sensor measurements target state let denote estimate pcrlb estimation error represented october draft fisher information matrix shown matrix bayesian estimation composed two parts obtained sensor measurements corresponding priori information furthermore assumption sensor measurements conditionally independent given target state obtained measurements multiple sensors written summation sensor plus prior information dxt jtp jtp matrix priori information represents standard sensor function target state dzi fisher information analog sensor measurement model analog data obtained substituting likelihood function given derivative exp october draft substituting letting denote standard matrix analog data obtained follows exp dzi fisher information quantized sensor measurement model quantized data calculated replacing likelihood function given since derivative likelihood function quantized observations exp exp october draft derive quantized data substituting follows exp exp thus get analog observation model quantized observation model mutual information sensor management target tracking seeks minimize uncertainty estimate target state conditioned sensor measurements entropy defined shannon represents uncertainty randomness estimate target state moreover relationship entropy sensor selection problem target tracking solved maximizing target state sensor measurements given distribution target state likelihood function sensor measureoctober draft ments analog data written dxt dxt dzt dzi dxt entropy sensor measurements conditional entropy sensor measurements given target state similarly quantized sensor measurements written dxt dxt dxt summation taken possible combinations quantized measurements set sensors mutual information upper bound miub computational complexity evaluating set sensors increases exponentially number sensors becomes impractical compute number sensors selected large chain rule october draft sensor target track fig wsn unreliable sensors numbers stars indicate sensor index left sensing probability right described follows show analog data results quantized data similar since conditionally independent given target state form markov chain following data processing inequality thus upper bound use 
mutual information upper bound miub performance metric sensor selection problem easily shown computational complexity evaluating miub selecting sensors increases linearly computing comparison performance metrics sensor selection numerical experiments subsection compare performance three performance metrics miub sensor selection problem numerical experiments october draft simulation setting simulations consider wsn shown fig sensors deployed roi area current work assume sensing probabilities sensors already known fusion center research learn probabilities iteratively interesting problem considered future generally sensors around target tracks higher sensing probabilities compared sensors wsn highly likely algorithm select sensors owing higher signal power sensing probability interest considering challenging cases test performance algorithm thus assume sensors around target track relatively low sensing probabilities shown figure moreover sensing probabilities may identical sensors environment however sensors sensing probability selection results would similar preliminary work thus consider scenario sensors wsn different sensing probabilities linear dynamical model target given time interval seconds process noise parameter source power variance measurement noise selected sensors quantize observations bits quantized data quantization thresholds selected values evenly partition interval prior distribution state target assumed gaussian mean covariance diag select initial particles drawn mean square error mse used measure errors ground truth estimates mse estimation time step tracking averaged ttotal trials mset ttotal total trial estimated actual target states time sensors highest different time steps first consider analog two quantization communication schemes one monte carlo run sensors highest listed table note since paper matrix consider determinant matrix corresponds area uncertainty ellipsoid interested effect sensors distances target october draft table ensors significant different time steps quantized data quantized data sensor sensor sensor sensor sensor sensor sensor sensor sensor sensor sensor sensor sensor sensor sensor sensor sensor sensor time step analog data sensing probabilities performance metrics thus compute performance metric sensor instead focusing different sets multiple sensors individual sensors miub identical generally quantized data contains less information compared analog data first discuss results analog data quantized data observe table sensors highest identical analog data quantized data means quantization preserves information analog data far sensor selection concerned additionally investigate three distinct time steps compare results time step target relatively close sensors similar distance target sensors significant sensors low sensing probabilities though similar distance target sensors therefore low time step target much closer sensor sensors sensor highest even though low sensing probability time step sensor closest one target low sensing probability sensor second closest higher sensing probability case sensor highest sensor highest quantized data contains much less information target compared analog data quantized data sensing probability sensors affects quantized data thus sensors relatively higher sensing probabilities october draft percentage reliable sensors mse time step fig time step target tracking performance analog data quantized data quantized data mse performance average number reliable sensors selected higher sensors quantized data case shown 
table therefore conclude analog data quantized data large number quantization levels affected sensing probabilities sensors quantized data small number quantization levels considerably affected sensing probabilities moreover fiss tends select sensors closer target compared miss explained equation corresponding parameters distance target sensors dominates however explanation found words sensor distance target sensing probability number quantization levels important factors determine tracking performance wsns tracking performance fig show performance wsn given fig one sensor selected time step ttotal monte carlo runs fig shows miss better mse performance fiss analog data quantized data explain result investigating percentage reliable sensors fusion center treats sensor unreliable amplitude quite close among experiments check within region october draft mse miub percentage reliable sensors fig time step miub target tracking performance miub selected ones monte carlo trials fig observe monte carlo trials around sensors selected miss reliable around sensors selected fiss reliable explains better estimation performance miss although sensor selection scheme quantized data selects even reliable sensors improvement respect mse performance significant information loss quantization process shown fig sensor selection scheme based analog data best tracking performance quantized data based sensor selection scheme achieves performance close analog data quantized data based sensor selection scheme performs much worse show simulation results quantized data following simulation experiments performance miubss complexity computing miub selecting sensors computing increase linearly much less evaluating increases exponentially fig shows results miss miubss sensors selected observe similar performance miss miubss terms percentage reliable sensors selected schemes mse performance words miubss obtains performance similar miss october draft much lower computational complexity thus next section utilize miubss instead miss multiobjective optimization framework compare fiss ultiobjective ptimization based ensor election section utilize mop framework find sensor selection strategy determine optimal sensor set mathematical description optimization problem given min subject vector decision variables elements define bounds decision variables functions represent equality inequality constraints problem respectively mop solutions satisfying constraints form feasible set optimization problem involving minimization objectives solution dominates solution called pareto optimal solution dominates set pareto optimal outcomes called pareto front technique solving mops minimize weighted sum objectives yields single solution corresponding weights used approach uniform spread weights employed obtain different solutions rarely produces uniform spread points pareto front optimal solutions may become closely spaced hence reducing number design alternatives work sensor selection strategies reflect different two objective functions estimation performance number selected sensors dependent binary decision variables objective functions based fisher information mutual information upper bound miub based objective functions let sensor selection strategy time step elements binary variables sensor selected october draft otherwise number sensors selected time step based sensor selection strategy matrix time step written jtp determine sensor selection strategy solution mop objective functions minimization information gap based measurements sensors 
based sensor set selected strategy log det log det log det minimization normalized number selected sensors miub based objective functions objective functions based miub similar minimization normalized information gap total miub based sensors miub based sensor selection strategy denotes minimization normalized number selected sensors paper solve mop binary decision variables using multiobjective evolutionary algorithm nondominating sorting genetic algorithm nsga algorithm yields solutions pareto front explore possible tradeoffs conflicting objectives first generates initial population size solution population feasible solution mop problem solution population represented vector elements element binary variable elitist algorithm good solutions always preserved population values october draft objective functions solution population form fitness values solution solutions population sorted based example solutions rank consist solutions solutions rank consist solutions dominated one solutions population two solutions population fitness value sorted based crowding distance closure measure solution neighbors uses rank solution create mating population offspring solutions generated using binary tournament selection selected solutions fitness value solution larger crowding distance selected problem binary decision variables use recombination operator called uniform crossover offspring solutions obtained parent solutions according defined random number along uniform mutation procedure employed uniform mutation offspring solution obtained parent solution according also determined according new population parents offsprings sorted based population size decreased original population size eliminating lower rank solutions remaining solutions fed binary tournament selection operator several generations population preserve solutions near pareto optimal front solution selection front since provides solutions necessary select one particular solution yield desired conflicting october draft knee point sol smi gap smi gap gap gap knee point sol knee point sol compromise sol knee point sol sensor selection compromise sol compromise sol sensor selection sensor selection fig compromise sol sensor selection pareto optimal front obtained using time step miub objectives knee curve introduced solution small decrease one objective associated large increase let two adjacent neighboring solutions front compute slope curve solutions slope arctan problem define zero solution none sensors selected similarly define one solution yields call solution maximizes knee point solution given arg max slope represents solutions near front alternatively utopia point mop defined individual minima objective defined min october draft compromise sol compromise sol knee point sol knee ponit sol mse number active sensors fig compromise sol compromise sol knee point sol knee ponit sol time step tracking performance time step different solution selection methods let point closest utopia point defined compromise solution paper use euclidean distance find compromise solution arg min next section present numerical results numerical experiments mop framework section conduct simulation experiments investigate performance multiobjective optimization method wsn considered subsection shown fig section system parameters also section note population size chosen choose number generations according diversity metric introduced diversity metric measures october draft sum mse mse sum time step time step fig tracking performance mop convex relaxation weighted 
sum methods mse miub mse extent spread achieved among obtained solutions defined euclidean distances extreme solutions boundary solutions obtained nondominated set observe fiss miubss diversity metric converges generations time steps thus simulation experiments set number generations also running include two extreme solutions zero one solutions initial population pareto optimal front fig present pareto optimal front mop obtained using fig fiss fig shows result miubss interesting note end generations yields different solutions front solution corresponds optimal selection sensors sensors know table distance target sensor plays important role sensing probability fiss time step target relatively close sensor sensor able achieve significant gain time step target relatively close sensors network fusion center relatively large uncertainty target location thus pareto front fiss steeper however compared october draft fiss miubss prefers sensors high sensing probability selects sensors pareto front miubss steep fiss moreover observe compromise solution knee point solution identical pareto front relatively steep solution selection method solution sensor selection strategy choose pareto optimal front determines overall tracking performance fig compare average number active time step tracking mse performance using knee point solution compromise solution miubss fiss mop framework observe similar results miub fiss knee point solution always selects one sensor target tracking thus gives poorer mse performance however sensor selection strategy using compromise solution selects sensors balance tradeoff performance gain miub total number selected sensors thus rest simulations use compromise solution choose sensor selection strategy pareto optimal front recall results shown fig fig miubss selects reliable sensors number sensors selected given furthermore fig shows number sensors selected known miubss tends select sensors fiss mop framework mse performance miubss better fiss convex optimization weighted sum methods fig compare tracking performance based convex relaxation based sensor selection method similar always chooses sensors sensors time step tracking apply convex relaxation method select minimum maximum number sensors selected compromise solution fig minimum number sensors convex relaxation based sensor selection method gives poor tracking performance hand selecting maximum number sensors sensors convex relaxation method negligibly improves mse performance compared mop approach thus compared convex relaxation method multiobjective optimization method gives satisfactory tracking performance saving terms number show number active sensors selected sensors investigate energy cost solution selection method selecting sensors data transmission incurs energy cost october draft miub mse miub miub mse fig time step turn sensors relatively low sensing probabilities selected sensors miubss fiss also compare mse performance mop framework weighted sum approach sensor selection scheme chooses sensors minimize summation objectives simulation results show miubss fig method obtains similar mse performance weighted sum method fiss fig weighted sum method achieves much worse mse performance naive strategy consider naive sensor selection method fusion center turns sensors relatively low sensing probabilities sensor selection fig present results fusion center turns sensors whose sensing probabilities lower threshold pth considered note wsn fig sensors relatively close target turned low sensing probabilities shown previous 
results miubss prefers select reliable sensors fiss selects sensors close target turning sensors selection performs worse miubss reduces selection alternatives performs better fiss reliable sensors selected closest sensors low sensing probabilities longer available fig time step comparison performance wsns without uncertainty comparison performance uncertainty fig present target tracking performance sensors reliable sensors compare results uncertain observations observe uncertain observations fiss miubss achieve worse mse performance though tend select sensors moreover compared fiss miubss selects many sensors uncertain observations therefore achieves better mse performance wsn another instance sensing probabilities fig sensors sensing probabilities distributed reverse manner compared fig sensors around target track relatively high sensing probabilities condition miubss fiss select similar number sensors similar mse performance reason scenario miubss fiss select sensors around target track high sensing probabilities also conducted experiments following scenarios sensors sensing probabilities uniformly distributed sensor measurements higher noise sensor measurements quantized bits since results provide new insights show results paper fig time step tracking performance miub sensors sensing probabilities reversely deployed conclusion paper proposed multiobjective optimization method sensor selection problem uncertain wireless sensor network wsn target tracking considered three performance metrics fisher information mutual information mutual information upper bound miub objective functions characterizing estimation performance multiobjective optimization problem mop numerical results show miub based selection scheme miubss selects reliable sensors compared based selection scheme fiss saving computational cost compared based selection scheme miss furthermore mop framework shown compromise solution pareto front mop achieves good estimation performance obtaining savings terms number selected sensors work interested finding sensor selection strategy multiobjective optimization method uncertain wsns future work consider application multiobjective optimization method multitarget tracking problem uncertain wsns references bokareva kanhere ristic gordon bessell rutten jha wireless sensor networks battlefield surveillance proc conf land warfare yick mukherjee ghosal wireless sensor network survey computer networks vol gungor hancke industrial wireless sensor networks challenges design principles technical approaches ieee trans ind vol otto jovanov wireless sensor networks personal health monitoring issues implementation computer communications vol rowaihy eswaran johnson verma brown porta survey sensor selection schemes wireless sensor networks defense security symposium international society optics photonics wang yao pottie estrin sensor selection heuristic target localization proc int symposium information processing sensor networks acm williams fisher willsky approximate dynamic programming sensor network management ieee trans signal vol hoffmann tomlin mobile sensor network control using mutual information methods particle filters ieee trans autom control vol zhao shin reich dynamic sensor collaboration ieee trans signal vol may zuo niu varshney posterior crlb based sensor selection target tracking sensor networks
ieee int conf acoustics speech signal process icassp vol apr sensor selection approach target tracking sensor networks quantized measurements ieee int conf acoustics speech signal process icassp mar masazade niu varshney keskinoz energy aware iterative source localization wireless sensor networks ieee trans signal vol joshi boyd sensor selection via convex optimization ieee trans signal vol ambrosino sinopoli sensor selection strategies state estimation energy constrained wireless sensor networks automatica vol jul liu kar fardad varshney sensor collaboration linear coherent estimation ieee trans signal vol liu fardad masazade varshney optimal periodic sensor scheduling networks dynamical systems ieee trans signal vol june nahi optimal recursive estimation uncertain observation ieee trans inf theory vol hadidi schwartz linear recursive state estimators uncertain observations ieee trans autom control vol hounkpevi yaz robust minimum variance linear state estimators multiple sensors different failure rates automatica vol october draft zhang shi mehr robust weighted filtering networked systems intermittent measurements multiple sensors int adapt control signal vol trappe zhang jamming sensor networks attack defense strategies ieee network vol mariton jump linear systems automatic control crc press costa guerra stationary filter linear minimum mean square error estimator markovian jump systems ieee trans autom control vol sinopoli schenato franceschetti poolla jordan sastry kalman filtering intermittent observations ieee trans autom control vol ozdemir niu varshney channel aware target localization quantized data wireless sensor networks ieee trans signal vol masazade niu varshney keskinoz channel aware iterative source localization wireless sensor networks proc ieee int conf information fusion fusion ieee masazade rajagopalan varshney mohan kiziltas sendur keskinoz optimization approach obtain decision thresholds distributed detection wireless sensor networks ieee trans man cybern part vol apr rajagopalan mohan varshney mehrotra mobile agent routing wireless sensor networks ieee congress evolutionary computation vol vol rajagopalan optimization algorithms sensor network design ieee annual wireless microwave technology conference wamicon apr nasir sengupta das suganthan improved optimization algorithm based fuzzy dominance risk minimization biometric sensor network ieee congress evolutionary computation cec june cao masazade varshney multiobjective optimization based sensor selection method target tracking wireless sensor networks proc ieee int conf information fusion fusion lee uncertainty wireless sensor networks workshop afrl ruan willett marrs palmieri marano practical fusion quantized measurements via particle filtering ieee trans aerosp electron vol january gordon salmond smith novel approach bayesian state estimation ieee proceedings radar signal processing vol iet arulampalam maskell gordon tim tutorial particle filters online bayesian tracking ieee trans signal vol trees detection estimation linear modulation theory part wiley interscience masazade niu varshney dynamic bit allocation object tracking wireless sensor networks ieee trans signal vol ryan tracking control based particle filter estimate aiaa guidance navigation control conference shannon mathematical theory communication acm sigmobile mobile computing communications review vol cover thomas elements information theory october john wiley sons draft zhang efficient sensor selection active information fusion ieee trans man part 
vol willett tian tracking data fusion handbook algorithms ybs publishing storrs marler arora survey optimization methods engineering struct multidisc optim vol deb pratap agarwal meyarivan fast elitist multiobjective genetic algorithm ieee trans evol vol apr issimakis adam genetic algorithm multidimensional knapsack problem journal heuristics boyd vandenberghe convex optimization october cambridge cambridge university press draft
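As a concrete illustration of the uncertainty model and tracking filter described in this paper, here is a minimal SIR particle-filter weight-update and resampling step in Python. The sensor layout, sensing probabilities, power-attenuation constants, and initial particle cloud are all illustrative assumptions; the essential point is the per-sensor two-component Gaussian-mixture likelihood (target sensed with probability p_i, noise only otherwise).

import numpy as np

rng = np.random.default_rng(1)
N = 2000                                       # number of particles
sensors = rng.uniform(0.0, 100.0, size=(10, 2))
p_sense = rng.uniform(0.3, 1.0, size=10)       # sensing probabilities (assumed known)
P0, d0, n_exp, sigma = 1000.0, 1.0, 2.0, 1.0   # illustrative constants

def amplitude(sensor, pts):
    # Received signal amplitude under a power-attenuation model.
    d = np.maximum(np.linalg.norm(pts - sensor, axis=-1), d0)
    return np.sqrt(P0) * (d0 / d) ** (n_exp / 2)

def gauss(z, mu):
    return np.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def pf_update(particles, weights, z):
    # Per-sensor mixture likelihood: signal present with prob. p_i, else noise only.
    for i in range(len(sensors)):
        a = amplitude(sensors[i], particles)
        lik = p_sense[i] * gauss(z[i], a) + (1 - p_sense[i]) * gauss(z[i], 0.0)
        weights = weights * lik
    weights = weights / weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)  # resample
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# One illustrative step: particles around a true target at (50, 50).
true_pos = np.array([50.0, 50.0])
z = np.array([amplitude(s, true_pos) for s in sensors]) + sigma * rng.standard_normal(10)
particles = true_pos + 5.0 * rng.standard_normal((N, 2))
particles, weights = pf_update(particles, np.full(N, 1.0 / N), z)
print("posterior-mean position estimate:", particles.mean(axis=0))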
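The closed-form Fisher information expressions for this mixture likelihood are lengthy, but the underlying quantity can be sanity-checked numerically via the score identity J = E[s(z) s(z)^T], with s(z) = d/dx log p(z|x). Below is a hedged Monte Carlo sketch for a single sensor; the sensor and target positions, the constants, and the finite-difference gradient are illustrative choices, not the paper's derivation.

import numpy as np

rng = np.random.default_rng(4)
sensor = np.array([60.0, 40.0])
x0 = np.array([50.0, 50.0])            # target position at which J is evaluated
p_sense, P0, d0, n_exp, sigma = 0.7, 1000.0, 1.0, 2.0, 1.0

def loglik(z, pos):
    # log of the two-component Gaussian mixture p(z | target at pos).
    d = max(np.linalg.norm(pos - sensor), d0)
    a = np.sqrt(P0) * (d0 / d) ** (n_exp / 2)
    mix = (p_sense * np.exp(-0.5 * ((z - a) / sigma) ** 2)
           + (1 - p_sense) * np.exp(-0.5 * (z / sigma) ** 2))
    return np.log(mix / (sigma * np.sqrt(2 * np.pi)))

def score(z, pos, h=1e-5):
    # Central finite differences of log p(z|x) w.r.t. the 2-D position.
    g = np.zeros(2)
    for k in range(2):
        e = np.zeros(2); e[k] = h
        g[k] = (loglik(z, pos + e) - loglik(z, pos - e)) / (2 * h)
    return g

# Draw z from the true mixture at x0 and average the score outer products.
J, M = np.zeros((2, 2)), 20000
d = max(np.linalg.norm(x0 - sensor), d0)
a = np.sqrt(P0) * (d0 / d) ** (n_exp / 2)
for _ in range(M):
    mean = a if rng.random() < p_sense else 0.0
    z = mean + sigma * rng.standard_normal()
    s = score(z, x0)
    J += np.outer(s, s)
print("estimated Fisher information:\n", J / M)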
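The two front-selection rules (knee-point solution and compromise solution) can also be stated compactly in code. The sketch below uses a common angle-based knee heuristic and the Euclidean distance to the utopia point, applied to a toy two-objective front (normalized sensor count vs. information gap); the front values are illustrative.

import numpy as np

# Toy Pareto front, sorted by the first objective.
front = np.array([[0.0, 1.00], [0.1, 0.55], [0.2, 0.30],
                  [0.3, 0.18], [0.5, 0.10], [1.0, 0.00]])

def knee_point(front):
    # Angle-based heuristic: the knee is where the front bends most sharply,
    # i.e. a small decrease in one objective costs a large increase in the other.
    best, best_bend = None, -np.inf
    for k in range(1, len(front) - 1):
        a1 = np.arctan2(front[k, 1] - front[k - 1, 1], front[k, 0] - front[k - 1, 0])
        a2 = np.arctan2(front[k + 1, 1] - front[k, 1], front[k + 1, 0] - front[k, 0])
        if a2 - a1 > best_bend:
            best_bend, best = a2 - a1, front[k]
    return best

def compromise_solution(front):
    # Utopia point: component-wise minima; pick the closest front point.
    utopia = front.min(axis=0)
    return front[np.argmin(np.linalg.norm(front - utopia, axis=1))]

print("knee point:", knee_point(front))
print("compromise:", compromise_solution(front))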
3
norm knockout method indirect reciprocity reveal indispensable hitoshi isamu satoshi tatsuya mar rissho university department business administration tokyo japan university department business administration tokyo japan rinri institute research center ethiculture studies tokyo japan university vienna faculty mathematics vienna austria hitoshi authors contributed equally work soka abstract although various norms cooperation suggested evolutionarily stable invasion free riders process alternation norms role diversified norms remain unclear evolution cooperation clarify dynamics norms cooperation indirect reciprocity also identify indispensable norms evolution cooperation inspired gene knockout method genetic engineering technique developed norm knockout method clarified norms necessary establishment cooperation results numerical investigations revealed majority norms gradually transitioned tolerant norms defectors eliminated strict norms furthermore cooperation emerges specific norms intolerant defectors knocked introduction reciprocity fundamental mechanism underlies cooperative societies theoretically well known direct reciprocity typified help help attitude promotes cooperative however recent societies high relational mobility indirect reciprocity help somebody else help plays important role promoting cooperation indirect reciprocity therefore focus much research interdisciplinary fields recent many theoretical studies indirect reciprocity explored norms become evolutionarily stable defection invasion free riders several typical norms approaches clarified robust norms maintain cooperative regime norms studies indirect reciprocity regarded assessment rules label action either good bad include tolerant norms assess cooperative behaviors toward defectors strict norms assess behaviors theoretical studies analysing global dynamics norms assume robust norms shared approaches clarified robustness norms invasion norms including free riders norms acceptable population however little known process gradual changes toward cooperation occur new norms emerge compete say process cooperation study indirect reciprocity dealt different norms analysed frequencies population consequence dynamical study individual keeps private image everyone else errors perception implementation included limited strategy space although considered action rules assessment rules possible norms indirect reciprocity studied cooperation evolves fully understood unless evolution norms also considered thus challenging task theoretically understand cooperation formed even collection norms social system cooperation diversity possible indispensable norms needed facilitate evolution cooperation melting pot norms even though norms never become dominant norms could accepted result process common aspects questions addressed possible norms considered combination norms governing group evolve explore dynamics cooperation using different social norms process evolution final version published scientific rreports cite article yamamoto okada uchida sasaki norm knockout method indirect reciprocity reveal indispensable norms sci doi norms transition stricter tolerant norms additionally find set norms seem impact promoting cooperation fundamental allow transition cooperative regime defective regime results optimal tool tackle challenge outlined see methods details model described odd using evolutionary game theoretical framework constructing interaction model based players private rules local information model giving game elucidate dynamics 
evolution cooperation amid coexistence diverse norms fig conducted numerical simulations possible norm combinations could react four combinations assessment criteria clarify dynamics evolution cooperation melting pot diverse norms figure shows graphs norm population cooperation ratio shown majority undergo alternation strict tolerant norms mostly order figure shows transition norm greatest population ratio many cases majority transitioned state strict majority afterwards majority norm changed tolerant allg contrast shown figs environment errors alternation strict norms tolerant norms observed however likelihood going decreased alternation paths could seen environment without increased important note similar paths toward cooperation observed initially assumed new norms created evolutionary process time cooperation evolves indicates cooperation diversity norms jointly evolve model alternation norms emerge one thing states defection dominant allb bbbb gbbb coexist jointly form majority however bgbb ggbb continue exist minority characteristic groups evaluation rule evaluation rule assesses donors took regardless evaluation recipient states defection dominant adopt strategies consider many partners result cooperation occur part allb norms thus survive lower cost hand cooperation achieved allg gggg ggbg ggbb gggb coexist common characteristic norms evaluation rule thus reciprocally cooperating norms survive gbbg becomes majority temporarily cooperation ratio rises environment without errors belong either group stably exist also rare makes majority temporarily environment errors meanwhile belongs norm groups constantly exist discover several norms indispensable evolution cooperation cooperation emerge without indispensable norms elucidate indispensable norms evolution cooperation propose novel analysis using norm knockout method method enables determine norms indispensable evolution cooperation norm knockout method inspired targeted gene knockout technique used genetic gene knockout genetic technique one organism genes made inoperative used research genes whose sequences known whose functions researchers infer gene function differences knockout animal normal animal simulating evolution utilized method removed one particular norm population understand whether norm indispensable one plays critical role evolution cooperation figure shows cooperation ratio particular norm knocked regardless whether error knocked cooperat ion evolve define indispensable norms evolution cooperation norms knocked average cooperation ratio less generations environment errors indispensable norms environment errors indispensable norms indispensable norm knocked cooperation evolve cooperation evolves alternation strict norms tolerant norms observed shown figs analyse whether alternation also occurs norm knocked population ratio norms typical norms knocked displayed graphs see fig figure shows results cases knocked discovered first condition necessary process cooperation evolves whether antagonize allb norm resists invasion allb appears society exist also society exist antagonize allb found norm indispensable resist allb discussion model offered two major findings evolution cooperation indirect reciprocity one hand essential contribution discovery indispensable norms norm knockout method using norm knockout method able elucidate existence norms indispensable evolution cooperation melting pot norms regardless existence errors indispensable norms addition environment errors indispensable norm interestingly reconciled 
minorities cooperative regime emerges temporarily become major norms process dynamics call minority norms required evolution cooperation unsung hero norms results clearly illustrate two roles norms one catalyse cooperative regime maintain regime norms evaluation rule play latter role hand discovered alternation norms recent analysis evolutionary stability invasion free riders could identify neither superiority among norms process path cooperation among studies indirect reciprocity first exhaustive theoretical analysis possible norms although several studies addressed comparison two types reciprocal others analyse alternation norms direct find alternation norms also discover indispensable norms required foster indirect reciprocity empirical supports various norms cooperative regime indicates norm plays important role human cooperation consistent simulation show norm survive cooperative regime one approach may provide deep insight evolution cooperation several norms absolutely play essential role order evolve cooperation even though surface seems though directly leading evolution cooperation present work considers single action rule cooperate good defect bad stress role multiple assessment rules however papers stress role multiple action integrating multiple assessment action rules may useful extension paper analyse happens one norm absent population however analysed indispensable combinations norms yet extending norm knockout method combinations norms may also useful extension paper methods section describe details model uses norm knockout method following model description follows odd purpose aim model understand dynamics norms evolution cooperation find indispensable norms without cooperative societies could never emerge particular reveal effect indispensable norms indirect reciprocity using new methodology call norm knockout method utilize giving game simulation entities state variables scales entities model agents play donor recipient giving game spatial structures donor chooses cooperation defection recipient using image donor recipient image either good bad donor image recipient good donor cooperates recipient image bad donor defects group size model agent norm list images agents agent also probability errors payoff game norm agent denoted one four possible assessment combinations two possible alleles locus four assessment combinations first locus gene represents assessment rule agent cooperates good recipient second locus represents assessment rule agent cooperates bad recipient third locus represents assessment rule agent defects good recipient fourth locus represents assessment rule agent defects bad recipient incidentally agents evaluate good instance allg always assesses others good thus genotype gggg using mentioned definition four loci similarly allb described bbbb ggbb ggbg gbbg agent two different types errors one probability agent updating evaluation others inverted errors perception described two probability perform action differently one prescribed action rule errors implementation described evolution process norms involves adopting genetic state variables initialization simulation shown table process overview scheduling simulation runs throughout generations generation consists rounds agents play giving game times donor generation end generation evolve norms using accumulated payoffs obtained generation one round two phases phase play giving games phase update images agents play giving games phase agents update images phase phase agent becomes donor donor randomly chooses 
recipient players excluding donor chooses whether give benefits recipient time action donor inverted probability donor cooperates pays cost recipient receives benefit phase agent set evaluates updates image agent set new image viewpoint depends action donor last round depends image viewpoint recipient last round time image inverted probability first round generation action regarded random rounds giving game played every generation agents evolve norm evolution process norms involves adopting genetic locus norm independent meaning assessment others adaptive process contain combination elements rather string norms modeled process updating norms string norms rather four different assessment rules enables norms interpreted different situations depending norm genotype first second third fourth loci represent assessment behavior tolerant behavior behavior justified defection punishment respectively agent randomly selects two agents agents including become parents choosing parents adopt roulette selection method roulette selection sets probability distribution agents denotes agent accumulated payoff generation given number donations received generation number donations gave umin means minimum value accumulated payoffs among finally agent updates norm using uniform crossover technique mutation rate locus inverted maintaining diversity norm space design concepts basic principles simulation utilized study indirect reciprocity explore different combinations norms interact produce evolutionary progression towards cooperation emergence cooperative regime situation social dilemma emerges interactions among agents various social norms adaptation agents model play giving game using images others agents update images others using norms every round evolve norms using accumulated payoffs obtained generation norm obtain higher payoff increase population generation objectives objective agents maximize payoff maximize payoff change norm end generation learning agents change norms generation using genetic algorithm fitness agent calculated accumulated payoff generation select parents agent model utilizes roulette selection method interaction interaction agents one one interaction giving game consists donor recipient spatial structures society stochasticity interaction agents stochastic process interaction partners chosen randomly society start simulation agent randomly assigned norm norms observation three indexes used observation average cooperation ratio society transition norms greatest populations population ratio norm initialization start simulation norm agent chosen randomly possible norm combinations first round generation evaluation agents initialized payoff agents initialized input data initialization model include external inputs number agents error ratio benefit cost constant submodels norm knockout method norm knockout method implemented follows knock particular norm norm removed first round generation concretely norm agent evolves norm knocked result adopting process norm agent changed one norms randomly words norm knocked never exist society references trivers evolution reciprocal altruism rev biol axelrod hamilton evolution cooperation science alexander biology moral systems aldine gruyter new york sugden economics rights cooperation welfare basil blackwell oxford kandori social norms community enforcement rev econ stud wedekind milinski cooperation image scoring humans science panchanathan boyd indirect reciprocity stabilize cooperation without secondorder free rider problem nature ohtsuki iwasa 
define goodness dynamics indirect reciprocity theor biol nowak sigmund evolution indirect reciprocity nature ohtsuki iwasa leading eight social norms maintain cooperation indirect reciprocity theor biol takahashi mashima importance subjectivity perceptual errors emergence indirect reciprocity theor biol pacheco santos chalub simple successful norm promotes cooperation indirect reciprocity plos comput biol ohtsuki iwasa global analyses evolutionary dynamics exhaustive search social norms maintain cooperation reputation theor biol uchida sigmund competition assessment rules indirect reciprocity theor biol uchida effect private information indirect reciprocity phys rev brandt sigmund logic reprobation assessment action rules indirect reciprocation theor biol gilvert troitzsch simulation social scientist open university press roberts evolution direct indirect reciprocity proc soc lond grimm odd protocol review first update ecol modell leimar hammerstein evolution cooperation indirect reciprocity proc soc lond panchanathan tale two defectors importance standing evolution indirect reciprocity theor biol nowak sigmund evolution indirect reciprocity image scoring nature nowak sigmund dynamics indirect reciprocity theor biol lotem fishman stone evolution cooperation individuals nature strepp scholz kruse speth reski plant nuclear gene knockout reveals role plastid division homolog bacterial cell division protein ftsz ancestral tubulin proc natl acad sci usa matsuo jusup iwasa conflict social norms may cause collapse cooperation indirect reciprocity opposing attitudes towards favoritism theor biol lindgren evolutionary phenomena simple dynamics langton taylor farmer rasmussen eds artificial life redwood city zagorsky reiter chatterjee nowak forgiver triumphs alternating prisoner dilemma plos one berg weissing importance mechanisms evolution cooperation proc soc lond swakman molleman ule egas cooperation empirical evidence behavioral strategies evol hum behav brandt sigmund indirect reciprocity image scoring moral hazard proc natl acad sci usa santos santos pacheco social norms cooperation societies plos comput biol sasaki okada nakai evolution conditional moral assessment indirect reciprocity sci sigmund calculus selfishness princeton university press holland adaptation natural artificial systems university michigan press ohtsuki iwasa nowak reputation effects public private interactions plos comput biol acknowledgements acknowledges scientific research also acknowledges kurihara uec japan providing computational resources acknowledges scientific research acknowledges austrian science fund fwf author contributions statement initiated performed project designed project wrote paper approved submission authors reviewed manuscript additional information authors declare competing financial interests tables table state variables initialization simulation variable agent norm image payoff environment description type variable norm agent images agents accumulated payoff giving game errors perception errors implementation types binary real number constant constant number agents generations simulation times playing giving game per generation benefit giving game cost giving game mutation ratio constant constant constant constant constant constant initial value chosen randomly figures giving game norms giving game playing giving game phase donor recipient good recipient bad recipient updating image phase recipient donor observer observer updates donor image donor bad good bad updated donor score good bad good 
bad typical norms donor action recipient image observers good last interaction donor action recipient image observers last interaction bad allb shunning stern judging image scoring simple standing good allg recipient figure norms cooperation simulation framework donor image recipient good donor gives recipient something personal cost recipient receives benefit nothing happens otherwise updating image phase observer updates evaluation donor basis donor action cooperation defection observer evaluation good bad recipient agent adopts evaluation rule donor depends donor action recipient image combination norm held agent total possible norms phase agent evaluates updates image donors typical norms expressed manner shown table typical norms include shunning gbbb stern judging gbbg image scoring ggbb simple standing ggbg strict norm action bad recipient assessed bad tolerant norm action bad recipient assessed good intermediately strict norm cooperation bad recipient assessed bad defection good contrast use image recipient uses donor action donor previous action evaluates donor good otherwise evaluates donor bad error defective regime cooperative regime errors defective regime cooperative regime figure time series typical simulation runs norms error left panel errors right panel average frequencies norms cooperation overall society black dotted line cooperation ratio parameters allb coexist cooperation emerge allb completely driven invades cooperation ratio abruptly rises time driven cooperation completely achieved permits invasion also coexists tolerant norms gggb allg finally strategies whose norm expressed words norms constantly cooperate cooperation selected past recipient coexist errors perception implementation introduced simulation similar run allb coexist cooperation emerge however cooperation achieved without going error errors figure alternation patterns majority norms replications error left panel errors right panel panel shows transition norms greatest populations round generations cooperation ratio exceeds generations cooperation ratio exceeds total generations sake visibility replication stop calculation allg becomes majority norm state tolerant norms coexist norms greatest population frequently change place thickness arrows corresponds number times alternation norms occurred see supplementary information details alternation norms allg observed stable errors perception implementation introduced simulations similar run shown transition majority norms distinct compared times errors error errors figure cooperation ratio norm knockout method graph shows average cooperation ratio replications typical norm knocked basic parameter set confirm effects errors perception errors implementation two simulations without error executed see supplementary information knockout analysis norms case errors perception errors implementation knocked cooperation evolve also becomes majority brief round process alternation knocked cooperation evolves extent percent even large furthermore knocked range cooperation achieved becomes narrow sufficiently large cooperation evolve case indispensable norm addition conversely knocked cooperation evolves sufficiently large manner gbbb knocked ggbb knocked figure time series typical simulation runs norm knockout method parameters knocked strategy eliminate allb exist knocked exists small population gain superiority allb supplementary information section includes supplementary text supplementary tables supplementary figure text details alternation norms present details 
alternating majority norms shown fig main text table shows alternation error table shows errors population norm show average population ratio generation replications shown table persistent norms able described details results norm knockout method comprehensively explore indispensable norms knocking norms table shows cooperation ratio norm knocked without errors indispensable norms knocked standard deviations larger cases reason knock produces two contrasting results cooperation dominant defection dominant indispensable norms knocked cooperative regime appears standard deviation small value analysis alternation norms cooperative regime achieved section analysed transition norms greatest populations cooperation ratio exceeds end generations sake understanding mechanism norms cooperation focused duration regime changes defection cooperation main text therefore stopped calculation transition majority norm allg becomes majority norm fig intuitively seems impossible transit majority allg majority norm however state everyone cooperates indiscriminately easily replaced state everyone refuses cooperate clarify whether allg stable state would happen generations considered show transition norms greatest population cooperative regime achieved results tables fig show cooperation regime maintained robustly tolerant norms allg gggb coexist although allg forms majority population norms including gggb ggbg ggbb protect invasion defective norms tables table alternation patterns dominant norms error correspond left panel fig main text row shows transition norms greatest populations period generations cooperation ratio exceed generations cooperation ratio exceeds total generations sake visibility stop calculation allg becomes majority norm fifty replications conducted time alternation majority norms total times could observed cooperative regime cooperation ratio exceeding achieved replications example replications number times transition greatest population followed allg moreover four indicated dash never cooperation ratio exceeding transition pattern dominant strategies allc allc gbgb gggb allc gbgb allc gggb allc gggb alld alld alld gggb allc gbgb allc table alternation patterns dominant norms errors correspond right panel fig main text setting table table transition pattern dominant strategies gggb allc gbgb gggb allc allc gggb allc allc allc gbgb gggb gbgb gggb gggb allc gggb allc gbgb gggb allc gggb gbgb gggb gbgb gggb allc allc gbgb gggb allc gbgb allc gbgb gggb gggb allc table average population ratio generation column shows errors cells obtained averaging results replications results show population norm norms exist norm knockout method used second row shows average cooperation ratio cratio standard deviation generation fourth row shown population norm standard deviation norms described coexist stably norm barely exist indispensable norm also survive included four persistent norms minority four cratio norms bbbb allb bbbg bbgb bbgg bgbb bgbg bggb bggg gbbb gbbg gbgb gbgg ggbb ggbg gggb gggg allg population population table analysis norm knockout method norms table shows cooperation ratio generation norms knocked value shows average cooperation ratio replications standard deviation cells average cooperation ratio less shown red paper call norms indispensable norms error indispensable norms errors two plus indispensable norms knockouted norm bbbb allb bbbg bbgb bbgg bgbb bgbg bggb bggg gbbb gbbg gbgb gbgg ggbb ggbg gggb gggg allg without knockout mean mean table number transitions norms greatest 
populations error cooperation ratio exceeds end generations simulation runs replications transition counted majority norm superseded norms parameters example transition allg gggb occurs times time alternation norms greatest populations total times could observed allg gggb allg allb allg gggb gggb gggb gggb allg allg gggb allb gggb allg table number transitions norms greatest populations errors cooperation ratio exceeds end generations setting table table parameters gggb allg allg gggb gggb allg allg gggb allg gggb gggb allg figure figure transition diagram norms greatest populations error errors cooperation ratio exceeds end generations panel drawn using data table panel drawn using data table panels show tolerant norms allg gggb coexist majority
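The norm knockout method tabulated above has a simple outer loop: remove one of the sixteen norms from the strategy space, re-run the evolutionary simulation, and average the cooperation ratio over replications; norms whose removal collapses cooperation are the "indispensable" ones. A minimal sketch, with the inner evolutionary run passed in as a callable, since its details (selection, mutation rate, generation count) follow the main-text parameters:

```python
from itertools import product

ALL_NORMS = ["".join(bits) for bits in product("GB", repeat=4)]  # 16 norms

def norm_knockout(run_evolution, replications=50):
    """run_evolution(allowed_norms) -> cooperation ratio of one run
    (assumed interface, not the authors' code).  Knocks out each norm
    in turn and averages cooperation over the replications."""
    results = {}
    for knocked in ALL_NORMS:
        allowed = [n for n in ALL_NORMS if n != knocked]
        ratios = [run_evolution(allowed) for _ in range(replications)]
        results[knocked] = sum(ratios) / len(ratios)
    return results
```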
9
feebly compact topologies semilattice expn may oleg gutik oleksandra sobol abstract study feebly compact topologies semilattice expn expn semitopological semilattice prove expn following conditions equivalent countably pracompact feebly compact iii compact expn space dedicated memory professor vitaly sushchanskyy shall follow terminology topological space clx intx denote closure interior respectively denote first infinite cardinal set positive integers subset topological space called regular open intx clx recall topological space said quasiregular open set exists open set clx semiregular base consisting regular open subsets compact open cover finite subcover countably compact open countable cover finite subcover countably compact subset every infinite subset accumulation point countably pracompact exists dense subset countably compact feebly compact lightly compact locally finite open cover finite compact dfcc every discrete family open subsets finite see pseudocompact tychonoff continuous function bounded according theorem tychonoff topological space feebly compact pseudocompact also hausdorff topological space feebly compact every locally finite family open subsets finite every compact space every sequentially compact space countably compact every countably compact space countably pracompact every countably pracompact space feebly compact see every space feebly compact see also obvious every feebly compact space compact semilattice commutative semigroup idempotents semilattice exists natural partial order element semilattice put topological semitopological semilattice topological space together continuous separately continuous semilattice operation semilattice topology topological semilattice shall call semilattice topology topology semitopological semilattice shall call topology date february mathematics subject classification primary secondary key words phrases topological semilattice semitopological semilattice compact countably compact feebly compact semiregular space regular space oleg gutik oleksandra sobol arbitrary positive integer arbitrary cardinal put expn obvious positive integer cardinal set expn binary operation semilattice later paper expn shall denote semilattice expn paper continuation study feebly compact topologies semilattice expn expn semitopological semilattice therein compact semilattice expn described proved arbitrary positive integer arbitrary infinite cardinal every countably compact semilattice expn compact topological semilattice also construct countably pracompact quasiregular topology semitopological semilattice discontinuous semilattice operation show arbitrary positive integer arbitrary infinite cardinal semiregular feebly compact semitopological semilattice expn compact topological semilattice paper show expn following conditions equivalent countably pracompact feebly compact iii compact expn space proof following lemma similar lemma proposition lemma every hausdorff compact topological space dense discrete subspace countably pracompact observe proposition arbitrary positive integer arbitrary infinite cardinal every expn functionally hausdorff quasiregular hence hausdorff proposition let arbitrary positive integer arbitrary infinite cardinal every compact expn subset expn dense expn proof suppose contrary exists compact expn expn dense expn exists point space expn clexpn expn implies exists open neighbourhood expn expn definition semilattice expn implies every maximal chain expn finite hence exists point proposition iii subset expn hence compact subspace 
expn obvious subsemilattice expn algebraically isomorphic semilattice expk positive integer arguments imply without loss generality may assume isolated zero compact semitopological semilattice expn hence assume compact topology expn zero expn isolated point expn next fix arbitrary infinite sequence distinct elements cardinal every positive integer put xnj expn moreover greatest element semilattice expn positive integer also definition semilattice expn implies every element expn exists one element every positive integer proposition iii isolated point expn hence arguments imply infinite discrete family open subset space expn contradicts compactness semitopological semilattice expn obtained contradiction implies statement proposition feebly compact topologies semilattice expn following example show converse statement proposition true case topological semilattices example fix arbitrary cardinal infinite subset denote natural embedding define topology following way elements semilattice isolated points family bdm finite base topology zero simple verifications show hausdorff locally compact semilattice topology compact hence corollary feebly compact remark observe case proposition topological space collectionwise normal countable base hence metrizable urysohn metrization theorem moreover space metrizable infinite cardinal topological sum metrizable space discrete space cardinality remark arbitrary positive integer infinite cardinal unique compact semilattice topology semilattice expn defined example construct expn following way fix arbitrary element expn stronger topology easy see subsemilattice expn isomorphic denote isomorphism fix arbitrary subset every zero element expn assume base bdm topology point coincides base topology assume subset topology generated map observe expn hausdorff locally compact topological space topological sum hausdorff locally compact space homeomorphic hausdorff locally compact space example subspace expn expn obvious set expn dense expn also since subsemilattice zero expn continuity semilattice operations expn expn property topology stronger imply expn topological semilattice moreover space expn compact contains compact subspace arguments presented proof proposition proposition iii imply following corollary corollary let arbitrary positive integer arbitrary infinite cardinal every compact expn point isolated expn expn remark observe example presented remark implies exists locally compact compact semitopological semilattice expn following property point isolated expn expn following proposition gives amazing property system neighbourhoodd zero compact semitopological semilattice expn proposition let arbitrary positive integer arbitrary infinite cardinal feebly compact semilattice expn every open neighbourhood zero expn exist finitely many expn clexpn oleg gutik oleksandra sobol proof suppose contrary exists open neighbourhood zero hausdorff feebly compact semitopological semilattice expn expn clexpn finitely many fix arbitrary expn clexpn proposition iii set open expn hence set expn clexpn open expn proposition exists isolated point expn expn expn clexpn assumption exists expn clexpn since proposition iii sets expn proposition implies exists isolated point expn expn expn clexpn hence induction construct sequence distinct points sequence isolated points expn expn positive integer following conditions hold expn clexpn expn clexpn similar arguments proof proposition imply following family infinite locally finite contradicts feeble compactness expn obtained contradiction 
implies statement proposition proposition iii implies element expn set semilattice expn hence theorem expn space feebly compact feebly compact semilattice expn hence proposition implies following proposition proposition let arbitrary positive integer arbitrary infinite cardinal feebly compact semilattice expn point expn open neighbourhood expn exist finitely many clexpn main results paper following theorem theorem let arbitrary positive integer arbitrary infinite cardinal expn following conditions equivalent countably pracompact feebly compact iii compact space expn proof implications iii trivial implication iii follows proposition lemma proposition implication follows proposition shall prove implication induction corollary every feebly compact semilattice semitopological semilattice compact hence topological space feebly compact topologies semilattice expn next shall show statements holds positive integers holds suppose feebly compact semilattice expk subspace hausdorff topological space fix arbitrary point arbitrary open neighbourhood since hausdorff exist disjoint open neighbourhoods zero semilattice expk respectively clx hence proposition exists finitely many expk subsemilattice expk algebraically isomorphic semilattice proposition iii theorem feebly compact semilattice assumption induction implies closed subsets implies open neighbourhood expk thus expk space completes proof requested implication following theorem gives sufficient condition compact space feebly compact theorem every quasiregular compact space feebly compact proof suppose contrary exists quasiregular compact space feebly compact exists infinite locally finite family open subsets induction shall construct infinite discrete family open subsets fix arbitrary arbitrary point since family locally finite exists open neighbourhood point intersects finitely many elements also quasiregularity implies exists open subset clx put since family locally finite infinite fix arbitrary arbitrary point since family locally finite exists open neighbourhood point intersects finitely many elements since quasiregular exists open subset clx construction implies closed sets clx clx disjoint hence next put also observe obvious suppose positive integer construct sequence infinite locally finite subfamilies open subsets space sequence open subsets sequence points sequence corresponding open neighbourhoods sequence disjoint subsets following conditions hold proper subfamily iii open subset clx clx clx disjoint oleg gutik oleksandra sobol next put since family infinite locally finite exists subfamily infinite locally finite fix arbitrary arbitrary point since family locally finite exists open neighbourhood point intersects finitely many elements since space quasiregular exists open subset clx simple verifications show conditions hold case positive integer hence induction construct following two infinite countable families open subsets clx positive integer since subfamily locally finite locally finite well also arguments imply clx locally finite families next shall show family vsis discrete indeed since family locally finite theorem union sis closed subset hence point open neighbourhood intersect elements family clx positive integer construction implies open neighbourhood intersects set hence infinite discrete family open subsets contradicts assumption space compact obtained contradiction implies statement theorem finish note simple remarks dense embedding infinite semigroup matrix units polycyclic monoid compact topological semigroups follow 
results paper let cardinal set define semigroup operation follows semigroup called semigroup units see bicyclic monoid semigroup identity generated two elements subjected condition cardinal polycyclic monoid generators semigroup zero given presentation see obvious case semigroup isomorphic bicyclic semigroup adjoined zero theorem every infinite cardinal semigroup units densely embed hausdorff feebly compact topological semigroup theorem arbitrary cardinal exists hausdorff feebly compact topological semigroup contains monoid dense subsemigroup theorems lemma imply following two corollaries corollary every infinite cardinal semigroup units densely embed hausdorff compact topological semigroup corollary arbitrary cardinal exists hausdorff compact topological semigroup contains monoid dense subsemigroup feebly compact topologies semilattice expn proof following corollary similar theorem corollary exists hausdorff topological semigroup compact square contains bicyclic monoid dense subsemigroup acknowledgements acknowledge alex ravsky referee comments suggestions references arhangel skii spaces base topological spaces mappings riga russian arkhangel skii topological function spaces kluwer dordrecht bagley connell mcknight properties characterizing spaces proc amer math soc banakh dimitrova gutik embedding bicyclic semigroup countably compact topological semigroups topology appl bardyla gutik semitopological polycyclic monoid algebra discr math carruth hildebrant koch theory topological semigroups vol marcel dekker new york basel vol marcel dekker new york basel clifford preston algebraic theory semigroups vols amer math soc surveys providence engelking general topology heldermann berlin gierz hofmann keimel lawson mislove scott continuous lattices domains cambridge univ press cambridge gutik ravsky pseudocompactness products topological brandt semitopological monoids math methods fields reprinted version math sci gutik sobol feebly compact topologies semilattice expn mat stud matveev survey star covering properties topology atlas preprint april ruppert compact semitopological semigroups intrinsic theory lect notes springer berlin urysohn zum metrisationsproblem math ann faculty mechanics mathematics national university lviv universytetska lviv ukraine address gutik ovgutik olesyasobol
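The garbled definitions used throughout the preceding paper are easier to follow in standard notation. The following LaTeX restatement is a hedged reconstruction: the semilattice operation on exp_n λ is assumed to be intersection, as in the cited literature, and B_λ is the standard semigroup of λ×λ matrix units mentioned in the closing remarks.

```latex
% Hedged restatement of the central objects (operation assumed to be
% intersection).
\[
  \exp_n\lambda \;=\; \{\,A \subseteq \lambda \;:\; |A| \le n\,\},
  \qquad A\cdot B \;=\; A \cap B,
\]
% the chain of compactness-type properties used throughout:
\[
  \text{compact} \;\Rightarrow\; \text{countably compact}
  \;\Rightarrow\; \text{countably pracompact}
  \;\Rightarrow\; \text{feebly compact},
\]
% and the semigroup of \lambda\times\lambda matrix units:
\[
  B_\lambda \;=\; (\lambda \times \lambda) \cup \{0\}, \qquad
  (a,b)\cdot(c,d) \;=\;
  \begin{cases}
    (a,d) & \text{if } b = c,\\
    0     & \text{otherwise.}
  \end{cases}
\]
```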
4
unit interval editing tractable yixin dec abstract given graph integers unit interval editing problem asks whether transformed unit interval graph vertex deletions edge deletions edge additions give algorithm solving problem time log denote respectively numbers vertices edges therefore tractable parameterized total number allowed operations algorithm implies tractability unit interval edge deletion problem also present efficient algorithm running time another result algorithm unit interval vertex deletion problem significantly improving algorithm van hof villanger runs time introduction graph unit interval graph vertices assigned intervals real line edge two vertices corresponding intervals intersect important applications unit interval graphs found computational biology data mainly obtained unreliable experimental methods therefore graph representing raw data unlikely unit interval graph important step understanding data find fix hidden errors purpose various graph modification problems formulated given graph vertices edges set modifications make unit interval graph particular edge additions also called completion edge deletions used fix false negatives false positives respectively vertex deletions viewed elimination outliers thus three variants known modification problems unit interval graphs well studied framework parameterized computation parameter usually number modifications recall graph problem nonnegative parameter tractable fpt algorithm solving time computable function depending problems unit interval completion unit interval vertex deletion shown fpt kaplan van bevern respectively contrast however parameterized complexity edge deletion version remained open date settle indeed devise parameterized algorithms deletion versions theorem problems unit interval vertex deletion unit interval edge deletion solved time respectively algorithm unit interval vertex deletion significantly improves currently best parameterized algorithm takes time another algorithmic result van hof villanger algorithm problem improve following theorem approximation algorithm approximation ratio minimization version unit interval vertex deletion problem structures recognition unit interval graphs well studied well understood known graph unit interval graph contains claw depicted fig hole induced cycle least four vertices unit interval graphs thus subclass chordal graphs graphs containing holes modification problems department computing hong kong polytechnic university hong kong china supported part hong kong research grants council rgc grant national natural science foundation china nsfc grants hong kong polytechnic university polyu grant european research council erc grant claw figure small forbidden induced graphs chordal graphs unit interval graphs among earliest studied problems parameterized computation study closely related example algorithm kaplan unit interval completion natural algorithm chordal completion specifically combinatorial result minimal ways fill holes better analysis shortly done cai also made explicit use search disposing finite forbidden induced subgraphs observation parameterized algorithm marx chordal vertex deletion problem immediately imply tractability unit interval vertex deletion problem one may break first induced claws call marx algorithm using hereditary property unit interval graphs graph class hereditary closed taking induced subgraphs however neither approach adapted edge deletion version simple way compared completion needs add edges fill hole length arbitrarily large hole 
fixed single edge deletion hand deletion vertices leaves induced subgraph allows focus holes claws eliminated however deletion edges fix holes claw graph may introduce new claws therefore although parameterized algorithm chordal edge deletion problem also presented marx obvious way use solve unit interval edge deletion problem direct algorithms unit interval vertex deletion later discovered van bevern van hof villanger using approach first phase algorithms breaks forbidden induced subgraphs six vertices note differentiates aforementioned simple approach breaks claws although phase conceptually intuitive rather nontrivial efficiently carry simple way introduces factor running time approaches diverse completely second phase van bevern used complicated iterative compression procedure high time complexity van hof villanger showed first phase problem solvable main observation van hof villanger connected claw graph proper graph whose definition postponed section conference presentation villanger first announced result claimed settles edge deletion version well however claimed result materialized appears neither conference version single author significantly revised extended journal version unfortunately unsubstantiated claim get circulated although algorithm van hof villanger nice simple proof excruciatingly complex revisit relation unit interval graphs subclasses proper graphs study structured way particular observe unit interval graphs precisely graphs chordal graphs proper helly graphs matter fact unit interval graphs also viewed unit helly interval graphs proper helly interval graphs thereby making natural subclass proper helly graphs full containment relations summarized fig reader unfamiliar graph classes figure may turn appendix brief overview observations inspire show connected claw graph proper helly graph easy adapt certifying recognition algorithms proper helly graphs detect induced claw one exists completely eliminated graph proper helly graph easy solve unit interval vertex deletion problem linear time likewise using structural properties proper helly graphs derive algorithm unit interval edge deletion straightforward use simple branching develop parameterized algorithms stated theorem though nontrivial analysis required obtain time bound unit interval edge deletion van bevern showed unit interval vertex deletion problem remains normal helly proper helly chordal interval unit helly unit interval proper interval figure containment relation related graph classes normal helly chordal interval proper helly chordal unit helly chordal unit interval claw graphs deriving algorithm problem claw graphs van hof villanger asked complexity claw free graphs somewhat intriguing mention claw graphs note claw graph necessarily proper helly graph evidenced fig answer question characterizing connected claw graphs proper helly graphs show graph must like keep one vertex twin class vertices twin class closed neighborhood original graph obtain routine solve problem linear time theorem problems unit interval vertex deletion unit interval edge deletion solved time claw graphs remark techniques developed previous work also used derive theorems techniques designed interval graphs nevertheless far complicated necessary applied unit interval graphs approach used current work based structural properties proper helly graphs tailored unit interval graphs hence simpler natural another benefit approach enables devise parameterized algorithm general modification problem unit interval graphs allows three types 
operations formulation generalizes three modifications also natural viewpoint aforementioned applications data different types errors commonly found coexisting indeed assumption input data contain single type errors somewhat counterintuitive formally given graph unit interval editing problem asks whether set vertices set edges set deletion addition make unit interval graph show fpt parameterized total number allowed operations theorem unit interval editing problem solved time log large algorithm unit interval editing uses approach however able show solved polynomial time proper helly graphs therefore first phase use brute force remove claws also holes length high exponential factor running time due purely phase every hole length least fixed deleting vertex edge manage show minimal solution reduced graph add edges problem solved linear time study general modification problems initiated cai observed problem fpt objective graph class finite number minimal forbidden induced subgraphs challenging thus devise parameterized algorithms graph classes whose minimal forbidden induced subgraphs infinite prior paper known nontrivial graph class general modification problem fpt chordal graphs theorem extends territory including another graph class corollary theorem implies tractability unit interval edge editing problem allows edge operations vertex deletions see simply try every combination long exceed given bound organization rest paper organized follows section presents combinatorial algorithmic results claw graphs sections present algorithms unit interval vertex deletion unit interval edge deletion respectively theorems section extends solve general editing problem theorem section closes paper discussing possible improvement new directions appendix provides brief overview related graph classes well characterizations forbidden induced subgraphs claw graphs graphs discussed paper undirected simple graph given vertex set edge set whose cardinalities denoted respectively input graphs paper assumed nontrivial connected hence use denote hole vertices add new vertex make adjacent vertices hole end respectively hole denoted complement graph graph defined vertex set pair vertices adjacent depicted fig complement interval graph intersection graph set intervals real line natural way extend interval graphs use arcs circle place intervals real line intersection graph arcs circle graph set intervals arcs called interval model arc model respectively specified endpoints paper intervals arcs closed distinct intervals arcs allowed share endpoint model restrictions sacrifice generality unit interval model unit arc model every interval arc length one interval arc model proper interval arc properly contains another interval arc graph unit interval graph proper interval graph unit graph proper graph unit interval model proper interval model unit arc model proper arc model respectively forbidden induced subgraphs unit interval graphs long known theorem graph unit interval graph contains claw hole clearly interval model viewed arc model leaving point uncovered hence interval graphs always graphs unit model necessarily proper way hold true general result states proper interval model always made unit thus two graph classes coincide fact heavily used present paper proofs consist modifying proper arc model proper interval model represents desired unit interval graph hand easy check proper graph unit graph therefore class unit graphs proper subclass proper graphs arc model helly every set pairwise intersecting arcs common 
intersection graph proper helly arc model proper helly theorem graph proper helly graph contains claw following immediate theorems corollary proper helly graph chordal unit interval graph theorems one also derive following combinatorial result since prove stronger result theorem implies omit proof proposition every connected claw graph proper helly graph note proposition well combinatorial statements follow need graph connected graphs closed taking disjoint unions proper helly graph chordal necessarily connected words disconnected proper graph must unit interval graph reason choose unit proper title paper twofold one hand applications interested naturally represented unit intervals hand want avoid use proper interval subgraphs ambiguous proposition turned algorithmic statement say recognition algorithm graph class certifying provides minimal forbidden induced subgraph input graph determined class certifying algorithms recognizing proper helly graphs reported lin cao one derive algorithm detecting induced claw graph proper helly graph would suffice develop main results even would take pain prove slightly stronger results proposition graphs denotes set claw purpose threefold first enable answer question asked van hof villanger complexity unit interval vertex deletion graphs thereby accurately delimiting complexity border problem second see disposal would otherwise dominate second phase algorithm unit interval edge deletion excluding enables obtain better exponential dependency running time third combinatorial characterization might interest true twin class graph maximal set vertices closed neighborhood graph called fat precisely six twin classes becomes remove one vertices twin definition vertices twin class induce clique five cliques corresponding hole fat hole clique hub fat theorem let connected graph either fat proper helly graph time either detect induced subgraph partition six cliques constituting fat build proper helly arc model proof prove assertion using algorithm described fig correctness implies assertion algorithm starts calling certifying algorithm cao recognizing proper helly graphs step enters one steps based outcome step subscripts vertices hole understood modulo condition steps satisfied either proper helly arc model subgraph returned correctness steps straightforward step find path connected possibly irrelevant steps note also step theorem algorithm passes steps outcome step let hole let vertex steps either detect induced subgraph partition six cliques constituting fat step scans vertices one one proceeds based adjacency step make means proceed exactly step note situation step satisfied adjacent four vertices none steps applies precisely three neighbors consecutive handled step steps take time steps need time condition step true takes time always terminates algorithm applying otherwise step never called rest step scans adjacency list vertex hence takes time total therefore total running time algorithm concludes proof implied theorem connected claw graph proper helly graph turns implies proposition point natural question appealing relation connected claw graphs unit helly graphs recall class unit interval graphs subclass unit helly graphs similar statement corollary unit helly graph chordal unit interval graph however connected claw graph unit helly graph constructed follows starting edge hole add new vertex two new edges actually graph defined tucker see also therefore proposition theorem best expect sense interestingly study proper graphs chordal hell showed connected claw 
graph contains must fat defined analogously fat algorithm input connected graph output proper helly arc model subgraph six cliques making fat call recognition algorithm proper helly graphs proper helly arc model found return claw found return found return contained hole isolated vertex found use search find shortest path xyhi single neighbor return claw two neighbors consecutive say return return claw two nonadjacent vertices outcome step must let subscripts modulo vertex adjacent similar step make single neighbor return claw adjacent return claw else return adjacent return adjacent vertices return else add adjacent vertices six cliques else hereafter let return return claw return claw return return else add return six cliques figure recognizing graphs note proper helly graph thus algorithm theorem yet certifying algorithm recognizing graphs detect induced proper helly circulararc graph need exploit arc model proper helly graph chordal set arcs vertices hole necessarily covers circle minimal interestingly converse holds true true chordal graphs proposition let proper helly graph chordal least four arcs needed cover whole circle arc model proposition forbids among others two arcs corollary let proper helly graph chordal let arc model set arcs minimally covers circle vertices represented induce hole therefore find shortest hole proper helly graph may work arc model find minimum set arcs covering circle model another important step algorithm unit interval editing problem detection special case existent must shortest hole graph model intersection called normal see appendix discussion lemma algorithm finding shortest hole proper helly graph proving lemma need introduce notation interval model interval vertex given left right endpoints respectively always holds arc model arc vertex given ccp ccp counterclockwise clockwise endpoints respectively points arc model assumed nonnegative particular inclusive exclusive perimeter circle point possibly ccp arc necessarily passes point note rotating arcs model change intersections among thus always assume particular arc contains avoids point say arc model graph canonical perimeter circle every endpoint different integer given arc model make canonical linear time sort endpoints radix sort replace indices order point interval model arc model defines clique denoted respectively set vertices whose intervals arcs contain distinct cliques defined model helly include maximal cliques graph since set endpoints finite point interval arc model find small positive value endpoint words endpoint endpoint note value understood function depending model well point instead constant let graph let proper helly arc model exactly one ccp contained proposition thus define relation pair intersecting arcs understood viewpoint observer placed center model say arc intersects arc left denoted set arcs whose union arc covering circle corresponding vertices induce connected unit interval graph ordered unique way find leftmost counterclockwise rightmost clockwise arcs vertex let denote length shortest holes defined hole contains following important proof lemma lemma let proper helly arc model graph let sequence vertices arc rightmost arcs containing contained hole hole length containing consecutive vertices proof suppose hole exists smallest number hole length contains assumption hole length let case assume given way corollary set arcs cover circle since assumption arc covers ccp note otherwise therefore arcs cover circle well corollary subset vertices induces hole since adjacent left 
subset contain hole containing shorter hence contains contradiction therefore exist hole length containing since adjacent vertices consecutive hole let fixed point proper helly arc model according corollary every hole needs visit vertex therefore find shortest hole suffices find hole length min proof lemma algorithm described fig finds hole length min step creates arrays starting distinct vertex ordered counter clockwise endpoints increasing main job algorithm done step step new vertex processed current array new vertex added one array array either dropped extended use dummy vertex means new vertex met last one put previous array step records last scanned arc clockwise endpoint last vertex current array met appended step note algorithm input proper helly arc model graph output shortest hole make canonical ccp ccp create new array arrays circularly linked next last array first one first array last vertex ccp continue delete next array append next array till last array first last vertices adjacent return return last array ignore rest iteration proceed next iteration figure finding shortest hole proper helly graph clockwise arc contains hand drop array consideration step step one arrays already induces hole first last vertices adjacent returned step otherwise induces hole step returns hole induced verify correctness algorithm suffices show length found hole min following hold array pair consecutive vertices arc rightmost arcs containing end algorithm dropped last vertex let vertices inferred vertex adjacent may may adjacent vertices induce hole otherwise induces hole lemma hole length arrays created step may dropped step note step processes arrays circular order starting first one array either deleted step extended adding one vertex step moment algorithm sizes two arrays differ one particular end step current array first well succeeding arrays one less element predecessor ensures hole returned step shortest among holes decided remaining arrays step remains argue array deleted step found hole longer first vertex status following referred moment deleted step deletes let last vertex let array immediately preceding let last vertex note arc counterclockwise endpoint otherwise would deleted therefore arc intersecting right also intersects right lemma hole length contains find hole length follows first array replace otherwise replace easy verify replacement remains hole length analyze running time algorithm endpoints scanned vertex belongs one array using linked list store array addition new vertex implemented constant time using circularly linked list organize arrays find next array delete current one constant time endpoints arcs given adjacency pair vertices checked constant time thus step takes time follows algorithm implemented time vertex deletion say set vertices hole cover chordal hole covers proper helly graphs characterized following lemma lemma let proper helly arc model graph set hole cover contains point proof vertex set subgraph also proper helly graph set arcs proper helly arc model direction may rotate make setting gives proper interval model direction note thats find minimal set vertices covers whole circle according corollary induces hole remains noting local part proper helly arc model behaves similarly interval model lemma easy extension clique separator property interval graphs hand get unit interval graph fat suffices delete smallest clique fat hole therefore theorem lemma imply following algorithm corollary unit interval vertex deletion problem solved time proper helly graphs 
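The corollary rests on the hole-cover lemma above: in a proper Helly circular-arc model of the graph there is a point p such that deleting the set K(p) of vertices whose arcs cover p leaves a unit interval graph, and a minimum hole cover is obtained by minimizing |K(p)| over points p. A minimal Python sketch of that minimization, written quadratically for clarity; the endpoint representation and function names are assumptions, not the paper's implementation:

```python
def covers(arc, p):
    """Does circular arc (a, b), read counterclockwise from a to b,
    contain point p?  Arcs with a > b wrap past position 0."""
    a, b = arc
    return a <= p <= b if a <= b else (p >= a or p <= b)

def min_point_cover(arcs, perimeter):
    """Return indices of a smallest set K(p) of arcs covering a common
    point p.  By the hole-cover lemma, deleting these vertices from a
    proper Helly circular-arc graph leaves a unit interval graph.
    Since |K(p)| only changes at endpoints, testing a point just past
    each endpoint suffices; assumes distinct integer endpoints in a
    canonical model."""
    best = None
    for a, b in arcs:
        for p in ((a + 0.5) % perimeter, (b + 0.5) % perimeter):
            k = [i for i, arc in enumerate(arcs) if covers(arc, p)]
            if best is None or len(k) < len(best):
                best = k
    return best
```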
graphs ready prove main results section theorem parameterized algorithm unit interval vertex deletion problem approximation algorithm approximation ratio minimization version proof let instance unit interval vertex deletion may assume unit interval graph parameterized algorithm calls first theorem decide whether induced subgraph based outcome solves problem making recursive calls calling algorithm corollary induced subgraph found calls times new instance since need delete least one vertex original instance least one instances otherwise algorithm calls corollary solve correctness algorithm follows discussion corollary subgraph vertices recursive calls made parameter value theorem recursive call made time call corollary takes time therefore total running time approximation algorithm adapted parameterized algorithm follows subgraph found theorem delete vertices continue process remaining graph call corollary solve optimally subgraph vertices thus subgraphs detected deleted taking time hence total corollary takes another time total running time thus approximation ratio clearly edge deletion inspired lemma one may expect similarly nice local point arc model minimal set edges whose deletion proper helly graph makes chordal nevertheless case shown fig may behave strange pathological way figure set edges solid dashed spans proper helly graph set dashed edges deleted rely reader verify minimality remaining graph solid edges unit interval graph note four edges would suffice recall means arc intersecting arc left point proper helly arc model define following set edges one may symmetrically view max easy verify following gives proper interval model ccp ccp otherwise perimeter circle see fig arbitrary point model given analogously may rotate model first make arc model interval model given figure illustration proposition proposition let proper helly arc model graph point subgraph unit interval graph direction involved challenging unit interval graph called spanning unit interval subgraph called maximum largest number edges among spanning unit interval subgraphs prove maximum spanning unit interval subgraphs certain property use following argument contradiction given spanning unit interval subgraph property locally modify unit interval model proper interval model represented graph satisfies recall always select way endpoint thus arc covering must contain lemma let proper helly arc model graph maximum spanning unit interval subgraph deleted edges point proof let unit interval model let set deleted edges find first vertex satisfying least one following conditions sets ccp disjoint edges sets disjoint edges recall vertex ccp vertex see fig two conditions imply ccp ccp figure illustration proof lemma respectively edges belongs ccp let leftmost interval note proposition arcs cover whole circle separate component take vertex leftmost arc satisfies condition otherwise let last interval containing let next interval leftmost interval intersect see fig intervals intersect isolated maximum moving right intersect would otherwise make unit interval model represents subgraph one edge argue contradiction vertices position must excludes possibility hand excluded proposition arcs cover whole circle therefore likewise either impossible see fig let vertex rightmost satisfies condition similarly vertex leftmost satisfies condition noting conditions symmetric assume vertex found satisfies condition symmetric argument would apply condition note selection leftmost arc among every satisfying thus setting intervals would produce 
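The parameterized algorithm in the theorem above is a bounded search tree: while a forbidden induced subgraph on at most six vertices exists, branch on deleting each of its vertices; once none exists, hand the remaining graph to the linear-time routine of the corollary. A hedged sketch of the recursion — the two callables and the graph interface (in particular `graph.without(v)`) are assumptions, not the paper's code:

```python
def uivd(graph, k, find_small_subgraph, solve_base):
    """Branch-and-solve scheme for unit interval vertex deletion.
    find_small_subgraph(graph) -> set of at most 6 vertices inducing a
    forbidden subgraph, or None (cf. the recognition algorithm above);
    solve_base(graph, k) -> list of at most k vertices to delete in the
    subgraph-free case, or None.  Returns a deletion set or None."""
    if k < 0:
        return None
    bad = find_small_subgraph(graph)
    if bad is None:
        # Remaining graph is a fat W4 or proper Helly circular-arc
        # graph; solvable directly (see the corollary above).
        return solve_base(graph, k)
    for v in bad:  # at most 6 branches per node of the search tree
        sol = uivd(graph.without(v), k - 1, find_small_subgraph, solve_base)
        if sol is not None:
            return sol + [v]
    return None
```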
another unit interval model rest proof may assume without loss generality first interval since model proper helly arc contain ccp words ccp disjoint ccp comprises precisely edges see fig proposition ccp ccp adjacent ccp equal case proof complete ccp hence focused edges ccp claim let ccp adjacent intersect intervals proof recall condition fact adjacent let vertex rightmost interval let leftmost interval intersecting note otherwise moving right intersect would make unit interval model represents subgraph one edge suppose contrary claim intersects interval intersects see fig since satisfies condition proposition let vertex leftmost interval since satisfies condition assumption adjacent proposition let leftmost arc ccp lies vertex exists candidate proposition let denote set vertices make new interval model resetting intervals vertices since every vertex adjacent least one proposition union arcs cover circle arcs thus viewed proper interval model new intervals adapted arcs formally specified follows left endpoint set ccp ccp vertex set ccp ccp ccp selection arcs thus pairwise intersecting proposition cover whole circle thus number vertices right endpoints set max max let denote resulting new interval model see proper note new interval contain contained interval left right endpoints intervals ordering counterclockwise clockwise endpoints arcs hence necessarily proper let denote proper interval graph represented want argue would contradict maximum unit interval subgraph conclude proof claim construction thus focus edges incident thus hand equal show induction every edges incident subset base case clear either inductive step otherwise subset verifies consider edges deleted vertex ccp edges incident ccp claim let ccp adjacent strictly edges incident ccp proof vertices consists three parts others claim edges vertices ccp edges ccp ccp edge vertices therefore show claim suffices show let edge since satisfies condition show case intersects right similar argument works case note intervals used disjoint consider first exists another vertex interval intersecting left see fig interval intersects necessarily intersects least one thus proposition vertex must result setting gives another proper interval model represents subgraph since maximum conclude case proof claim concluded second case vertex whose interval intersects left note every interval intersecting represents vertex let vertex leftmost interval let vertex ccp rightmost interval two vertices exist candidates respectively see fig vertex whose interval contains vertex exists contradicts selection also selection interval contains thus setting gives another proper interval model represents subgraph since maximum conclude therefore every vertex ccp less edges incident ccp moreover least one vertex ccp adjacent noting edge ccp incident two vertices ccp follows ccp contradicting maximum unit interval subgraph worth stressing thinnest place arc model respect edges necessarily thinnest place respect vertices see fig example linear number different places check thus edge deletion problem also solved linear time proper helly graphs problem also simple fat figure thinnest points vertices edges respectively theorem unit interval edge deletion problem solved time proper helly graphs graphs proof may assume input graph unit interval graph according corollary chordal build proper helly arc model without loss generality assume canonical according lemma problem reduces finding point minimized suffices consider points calculate first deduce follows clockwise endpoint 
arc otherwise ccp vertex difference set edges incident particular note initial value calculated time vertex adjacency list scanned exactly follows total running time may assume input graph connected otherwise work components one one according theorem either proper helly graph fat former case considered assume fat let five cliques fat hole let hub may look maximum spanning unit interval subgraph pair vertices argue existence subgraph definition let maximum spanning unit interval subgraph assume without loss generality may change deleted edges incident make another subgraph neighbors graph clearly unit interval graph less edges operation applied pair twin class violate earlier pair repeating finally end desired maximum spanning unit interval subgraph therefore always subscripts modulo deleting edges cliques together edges one leaves maximum spanning unit interval subgraph sizes six cliques calculated done time minimum set edges decided constant time therefore total running time proof complete indeed hard see proof theorems every maximum spanning unit interval subgraph fat keeps six twin classes satisfied weaker statement sufficient algorithm theorems already imply branching algorithm unit interval edge deletion problem running time constant decided edges however closer look tells deleting single edge introduces either claw forces delete edge disposal similar labels used following proof given fig proposition let spanning unit interval subgraph graph let must contains least two edges triangle involving set contains either edge least two edges triangle proof consider intervals unit interval model remains triangle interval length less disjoint least one words edges incident otherwise triangle assume without loss generality adjacent therefore least two edges triangle cases symmetric contains none three edges contains least two edges triangle otherwise claw observation refined analysis yield running time claimed theorem algorithm goes similarly parameterized algorithm unit interval vertex deletion used proof theorem theorem unit interval edge deletion problem solved time proof algorithm calls first theorem decide whether exists induced subgraph based outcome solves problem making recursive calls calling algorithm theorem claw found algorithm makes respectively calls new instance parameter value deleting one edge claw algorithm branches deleting two edges triangle involving vertex since three triangles three options algorithm makes calls parameter value algorithm makes calls parameter value deleting edge another parameter value deleting two edges triangle verify correctness algorithm suffices show spanning unit interval subgraph least one recursive call generates graph satisfying obvious recursive calls made claws follows proposition recursive calls made standard technique easy verify recursive calls made time moreover algorithm theorem called times follows total running time algorithm dominates branching step disposal technique author developed one may slightly improve running time constant avoid blurring focus present paper omit details general editing let let set edges set respectively say editing set deletion addition create unit interval graph size defined say smaller hold true least one inequality strict unit interval editing problem formally defined follows input task graph three nonnegative integers either construct editing set size report set exists remark necessary impose quotas different modifications stated though cumbersome way since vertex deletions clearly preferable edge operations 
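Written out with explicit symbols (which are ours, not the paper's), the problem definition just given reads roughly as follows:

```latex
% Hedged restatement of the unit interval editing problem.
\[
  \text{Editing set: } F=(V_-,\,E_-,\,E_+),\quad
  V_-\subseteq V(G),\ \ E_-\subseteq E(G-V_-),\ \ E_+\cap E(G)=\emptyset,
\]
\[
  \text{such that } (G-V_-) - E_- + E_+ \text{ is a unit interval
  graph, with } |V_-|\le k_1,\ \ |E_-|\le k_2,\ \ |E_+|\le k_3 .
\]
```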
problem would computationally equivalent unit interval vertex deletion single budget total number operations large algorithm unit interval editing problem also uses approach previous algorithms main discrepancy lies first phase satisfied proper helly graph graph particular also want dispose holes precisely holes fixable merely adding edges recall least edges needed fill special cases fat hard solve fat make rest focused also simplify presentation also exclude cases disposing first phase graph called reduced contains claw proposition reduced graph proper helly graph hence happens chordal must unit interval graph corollary terminate algorithm otherwise algorithm enters second phase reduced every minimal forbidden induced subgraph hole fixed deleting vertices edges exploit proper helly arc model according lemma exists point model suffices delete vertices results subgraph unit interval graph therefore may assume hereafter point exists remains reduced vertex deletions result delete edges well consider minimal editing set reduced graph easy verify minimal editing set reduced graph particular needs intersect holes use shorthand arc model proper helly one may want use lemma find minimum set edges point finish task however lemma ruled possibility delete less edges break long holes subsequently add edges fix incurred subgraphs claw need following lemma lemma let minimal editing set reduced graph proof may assume without loss generality otherwise suffices consider inclusionwise minimal editing set still reduced graph let proper helly arc model let minimal subset every hole union arcs vertices cover circle argue existence showing satisfies condition suppose contradiction exists hole whose arcs cover circle find minimal subset covers circle corollary subset least vertices thus length hole least fixed addition edges harder part argue already unit interval graph together minimality would imply suppose contradiction claw hole find three vertices follows corollary fact least six arcs required cover circle result arcs set six vertices covers circle must subgraph claw therefore cover whole circle claw hand selection also true hole thus unit interval graph find two vertices find shortest path path one inner vertex makes hole together unit interval graph would imply exists inner vertex path consider new pair accordingly note distance smaller hence repeating argument times end two vertices distance precisely desired common neighbor minimality exists hole arcs vertices cover circle hole necessarily passes denote note intersects since proper helly cover circle moreover happen intersects arcs simultaneously find vxp vxq vxi every possibly vxp makes hole union arcs covers circle contradicting definition concludes proof therefore reduced graph always solution add edge lemma editing set always find point model use replace use vertices close point replace therefore problem boils find weak point arc model observation formalized following lemma point result stronger required algorithm present current form interest see section discussions lemma given proper helly graph nonnegative integer calculate time minimum number editing set size time find editing set proof may assume chordal otherwise corollary unit interval graph problem becomes trivial empty set suffice let fix proper helly arc model lemma follows lemma point satisfying hence may assume point exists subset vertices remains proper helly graph hence point define editing set taking vertices clockwise arcs argue first minimum cardinality edge set taken among points 
desired number see fig let editing set size according lemma point deletion makes unit interval graph consider original model note vertex either otherwise replacing vertex end edge removing edge gives editing set size let comprise vertices whose arcs clockwise well first vertices whose arcs immediately right let easy verify also editing set lemma note arcs consecutive let vertex clockwise arc ccp desired point give algorithm finding desired point assume canonical suffices consider points calculate first maintain queue initially set deduce new sets follows clockwise endpoint arc otherwise ccp vertex enqueue dequeue set vertices queue whose size remains different edges incident particular note initial sets found time vertex adjacency scanned exactly total running time concludes proof figure illustration proof lemma consists two thick arcs moving point gives one note general case point identified lemma may thinnest point vertices thinnest point edges specified respectively lemmas indeed different values thinnest points found lemma may different mixed hole covers consists vertices edges thus combinatorial characterization given lemma extends lemmas algorithm used proof similar theorem recall reduced graph proper helly graph thus lemmas following consequence suffices call algorithm returns found editing set otherwise corollary unit interval editing problem solved time reduced graphs putting together steps tractability unit interval editing follows note fill hole need add edge whose ends distance proof theorem start calling theorem subgraph detected branch possible ways destroying contained otherwise disposal proper helly arc model call lemma find shortest hole either delete one vertices edges add one edges subscripts modulo one three parameters decreases repeat two steps parameter becomes negative terminate algorithm returning graph reduced call algorithm corollary solve correctness algorithm follows lemma corollary disposal subgraph recursive calls made parameter therefore total number instances reduced graphs made algorithm follows total running time algorithm log worth mentioning lemma actually implies algorithm unit interval deletion problem allows vertex deletions edge deletions proper helly graphs algorithm general graphs constant even smaller notice problem also easy fat worst cases vertex deletions edge deletions different concluding remarks aforementioned algorithms exploit characterization unit interval graphs forbidden induced subgraphs recently bliznets used different approach produce subexponentialtime parameterized algorithm unit interval completion whose polynomial factor however linear using reduction vertex cover one show vertex deletion version solved time unless exponent time hypothesis fails edge deletion version fpt well one may want ask side belongs evidence favor hard side related graph classes edge deletion versions seem harder vertex deletion counterparts said hard slightly improve constant running time significant improvement would need new observation interesting would fathom limits particular deletion problems solved time polynomial kernels unit interval completion unit interval vertex deletion known using approximation algorithm theorem recently developed kernel unit interval vertex deletion improving one fomin conjecture unit interval edge deletion problem also small polynomial kernel algorithm unit interval editing second nontrivial fpt algorithm general editing problem main ingredient algorithm characterization mixed deletion vertices edges break holes similar 
study conducted algorithm chordal editing problem contrast lemmas somewhat stronger example shown small forbidden subgraphs fixed edge additions needed together marx conjectured also true chordal editing problem failed find proof little study done mixed deletion vertices edges hope work trigger studies direction deepen understanding various graph classes point although start breaking small forbidden induced subgraphs major proof technique instead manipulating interval models technique combining figure forbidden induced graphs normal helly claw proper normal helly chordal claw proper helly unit interval unit helly claw unit interval proper interval figure forbidden induced subgraphs containment relations related graph classes constructive interval models destructive forbidden induced subgraphs worth study related problems appendix convenience reader collect related graph classes containment relations fig adapted lin note graph classes used present paper graphs defined tucker see also subgraphs introduced main text depicted fig relations fig viewed intersection models arcs intervals forbidden induced subgraphs every minimal forbidden induced subgraph necessarily minimal forbidden induced subgraph subclass example proposition corollary actually properties normal helly graphs normal helly graph chordal every arc model normal helly also true subclass proper helly graphs arc model proper helly graph may proper word caution worth definition proper helly graphs one graph might admit two arc models one proper helly arc model proper helly therefore class proper helly graphs contain graphs proper graphs helly graphs proper subclass similar remark applies normal helly graphs three classes top fig characterizations minimal forbidden induced subgraphs still open third level minimal forbidden induced subgraphs proper circulararc graphs normal helly graphs completely determined tucker cao classes lower levels forbidden induced subgraphs respect immediate given able derive minimal forbidden induced subgraphs classes example characterization unit interval graphs theorem follows characterization interval graphs find claw likewise minimal forbidden induced subgraphs proper helly graphs stated theorem derived proper graphs corollary proper graph proper helly graph must contain clearly contains see contain equivalent check contains two edges another independent vertex directly read fig let denote vertices hole edges vertex hole holes longer vertex references pavol hell chordal proper circular arc graphs discrete mathematics bessy anthony perez polynomial kernels proper interval completion related problems information computation van bevern christian komusiewicz hannes moser rolf niedermeier measuring indifference unit interval vertex deletion dimitrios thilikos editor concepts computer science volume lncs pages springer ivan bliznets fedor fomin marcin pilipczuk pilipczuk subexponential parameterized algorithm proper interval completion siam journal discrete mathematics preliminary version appeared esa hans bodlaender babette van fluiter intervalizing graphs dna physical mapping discrete applied mathematics preliminary version appeared icalp pablo burzyn flavia bonomo guillermo results edge modification problems discrete applied mathematics leizhen cai tractability graph modification problems hereditary properties information processing letters yixin cao linear recognition almost interval graphs krauthgamer pages full version available yixin cao luciano grippo safe forbidden induced subgraphs normal helly 
graphs characterization detection discrete applied mathematics yixin cao marx chordal editing tractable algorithmica preliminary version appeared stacs xiaotie deng pavol hell jing huang representation algorithms proper graphs proper interval graphs siam journal computing rodney downey michael fellows fundamentals parameterized complexity undergraduate texts computer science springer fedor fomin saket saurabh yngve villanger polynomial kernel proper interval vertex deletion siam journal discrete mathematics preliminary version appeared esa delbert fulkerson oliver gross incidence matrices interval graphs pacific journal mathematics gavril algorithms graphs networks paul goldberg martin golumbic haim kaplan ron shamir four strikes physical mapping dna journal computational biology pim van hof yngve villanger proper interval vertex deletion algorithmica haim kaplan ron shamir robert endre tarjan tractability parameterized completion problems chordal strongly chordal proper interval graphs siam journal computing preliminary version appeared focs yuping yixin cao xiating ouyang jianxin wang unit interval vertex deletion fewer vertices relevant robert krauthgamer editor proceedings annual symposium discrete algorithms soda siam john lewis mihalis yannakakis problem hereditary properties journal computer system sciences preliminary versions independently presented stoc min chih lin francisco soulignac jayme szwarcfiter normal helly graphs subclasses discrete applied mathematics yunlong liu jianxin wang jie jianer chen yixin cao edge deletion problems branching facilitated modular decomposition theoretical computer science marx chordal deletion tractable algorithmica preliminary version appeared marx barry sullivan igor razgon finding small separators linear time via treewidth reduction acm transactions algorithms preliminary version appeared stacs fred roberts indifference graphs frank harary editor proof techniques graph theory proc second ann arbor graph theory pages academic press new york alan tucker structure theorems graphs discrete mathematics yngve villanger proper interval vertex deletion venkatesh raman saket saurabh editors parameterized exact computation ipec volume lncs pages springer gerd wegner eigenschaften der nerven familien phd thesis mihalis yannakakis computing minimum siam journal algebraic discrete methods
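A minimal, self-contained sketch of the bounded search tree strategy described in the proof above, appended here for illustration. The full algorithm branches on several small forbidden induced subgraphs and finishes with the polynomial-time routine for reduced (proper Helly circular-arc) graphs; this toy version handles only the claw (K_{1,3}) obstruction, with budgets k1, k2, k3 bounding vertex deletions, edge deletions, and edge additions respectively. All function names and the graph representation are our own, not the paper's.

```python
# Toy bounded search tree: destroy all induced claws within the budgets
# k1 (vertex deletions), k2 (edge deletions), k3 (edge additions).
from itertools import combinations

def find_claw(V, E):
    """Return (center, a, b, c) of an induced claw, or None."""
    adj = {v: {u for u in V if frozenset((u, v)) in E} for v in V}
    for u in V:
        for a, b, c in combinations(sorted(adj[u]), 3):
            if (frozenset((a, b)) not in E and frozenset((a, c)) not in E
                    and frozenset((b, c)) not in E):
                return (u, a, b, c)
    return None

def destroy_claws(V, E, k1, k2, k3):
    """True iff all claws can be destroyed within the three budgets."""
    claw = find_claw(V, E)
    if claw is None:
        return True                       # reduced instance reached
    u, a, b, c = claw
    if k1 > 0:                            # branch 1: delete one vertex
        for v in claw:
            if destroy_claws(V - {v}, {e for e in E if v not in e},
                             k1 - 1, k2, k3):
                return True
    if k2 > 0:                            # branch 2: delete one claw edge
        for w in (a, b, c):
            if destroy_claws(V, E - {frozenset((u, w))}, k1, k2 - 1, k3):
                return True
    if k3 > 0:                            # branch 3: add one missing leaf edge
        for x, y in ((a, b), (a, c), (b, c)):
            if destroy_claws(V, E | {frozenset((x, y))}, k1, k2, k3 - 1):
                return True
    return False

# example: a single claw needs exactly one deletion or addition
V = {0, 1, 2, 3}
E = {frozenset((0, 1)), frozenset((0, 2)), frozenset((0, 3))}
print(destroy_claws(V, E, 0, 1, 0))       # True: delete one edge
```

As in the proof, each branch decreases one of the three parameters, so the search tree has depth at most k1 + k2 + k3 and the recursion terminates; the real algorithm then hands the reduced graph to the polynomial subroutine instead of returning True.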
8
sep novel evaluation metrics seam carving based image retargeting tam nguyen guangyu gao department computer science university dayton email tamnguyen school software beijing institute technology email guangyugao abstract image retargeting effectively resizes images preserving recognizability important image regions retargeting methods rely good importance maps cue retain remove certain regions input image addition traditional evaluation exhaustively depends user ratings legitimate need methodological approach evaluating retargeted results therefore paper conduct study analysis prominent method image retargeting seam carving first introduce two novel evaluation metrics considered proxy user ratings second exploit salient object dataset benchmark task investigate different types importance maps particular problem experiments show humans general agree evaluation metrics retargeted results importance map methods consistently favorable others fig flowchart seam carving given image importance map different methods namely edge detector human fixation predictor salient object detector removal map later generated highlighting least important seams red lines represented removal seams accordingly retargeted images finally constructed removing red lines reach desired size index seam carving image retargeting visual saliency introduction image retargeting sometimes referred image cropping thumbnailing resizing beneficial practical scenarios facilitating large image viewing small size displays particularly mobile devices challenging task since requires preserving relevant information maintaining aesthetically pleasing image viewers premise task remove indistinct regions retain context salient regions pioneering work setlur propose using importance map source image obtained saliency face detection importance map pixels higher values likely preserved vice versa specified size contains important regions source image simply cropped otherwise important regions removed image fill resulting holes using background creation technique later avidan propose seam carving method based importance map computed gradient magnitude seam carving functions constructing number seams paths least importance image automatically removes seams reduce image size zhang present image resizing method attempts ensure important local regions undergo geometric similarity transformation time image edge structure preserved suh propose general thumbnail cropping method based saliency model finds informative portion images cuts part images marchesotti propose framework image thumbnailing based visual similarity underlying assumption images sharing global visual appearance likely share similar saliency values works dedicated still images chamaret meur propose video retargeting algorithm meanwhile rubinstein extend seam carving video retargeting date existing evaluation scheme mostly depends user ratings however always feasible recruit large pool participants evaluation also mostly impossible get participant pool previous work make fair comparison thus legitimate need automatic way evaluate retargeting methods paper revisit analyze fig two novel metrics namely mean area ratio mean sum squared distances left right original image ground truth saliency map shape points ground truth map retargeted ground truth map cov shape points retargeted ground truth map mean area ratio map mapping two correspondence sets popular method seam carving image retargeting contribution first propose two novel metrics systematically evaluate retargeting algorithms 
namely mean area ratio mar mean sum squared distances mssd novel metrics focus much shape salient object distorted retargeting process second evaluate various types importance map namely fixation prediction map salient object map edge map newly proposed metrics seam carving revisit proposed evaluation metrics seam carving revisit seam carving popular method image retargeting aims automatically retarget images certain size facilitate viewing purpose aforementioned let image illustrated figure first step computation importance map quantifies importance every pixel image every pixel importance map assigned value within higher values mean higher importance assume landscape image aim reduce width vertical seam path image top bottom containing one pixel per row defined corresponding column row within seam goal find optimal seam minimizes min importance value one seam pixel eqn solved dynamic programming optimal seam later removed input image process repeats image reaches desired dimension worth noting recent years witness rapid popularity smartphones tablets equips people imaging capabilities fact people taking photos different ways traditional filmmakers take photos landscape human figures however mobile phone people prefer take pictures portrait mode due difference people preferences applications like instagram developed meets demands groups people asking crop image square size social media profile images square form facebook twitter one reasonable explanation squared photos display well feed format work utilize seam carving method application called automatically retargets images square size particular seam carving process loops times landscape image reaches expected square size portrait image transpose image use function find optimal vertical seam proposed evaluation metrics order mitigate dependency user ratings propose two additional metrics systematically evaluate retargeting algorithms namely mean area ratio mean sum squared distances motivation users prefer shape salient object preserved image retargeting process discussed shown fig distorted boxes first two rows retargeted images entertained viewers first metric mean area ratio measures much salient object preserved image retargeting simultaneously remove seams original image ground truth saliency map obviously retargeted groundtruth map exactly size retargeted image input image area ratio computed ratio salient regions retargeted ground truth map ground truth salient areas fig left right original image importance maps different methods sobel edge map structured edge map boolean map based saliency bms saliency based region covariance cov color transform hdct discriminative regional feature integration drfi shown fig area ratio whole salient regions retained mean area ratio mar set input images computed area ratios images second metric mean sum squared distances evaluates shape similarity salient regions image retargeting adopt shape contexts measure shape similarity image shape contexts compute shape correspondences two given silhouettes ground truth map retargeted ground truth maps shown fig next distances two correspondence sets summed illustrated fig sum squared distances two shapes identical eventually mean sum squared distances mssd computed across images actually two proposed evaluation metrics complementary mar measures much salient object maintained whereas mssd measures amount distortion image retargeting process selection importance map literature edge map first introduced importance map image retargeting problem additionally 
importance level measured visual saliency values exist two popular outputs visual saliency prediction namely predicted human fixation map fixation prediction salient object map salient detection literature also exist many efforts predict visual saliency different cues depth matters audio source touch behavior object proposals semantic priors paper consider three types importance maps follows edge map retrieved edge detection process fundamental task computer vision since early early works focused detection intensity color gradients example popular sobel detector computes approximation gradient image intensity function recently dollar proposed structured edge detection formulating problem edge detection predicting local segmentation masks given input image patches work consider different edge detectors fixation prediction map obtained trained models constructed originally understand human viewing patterns actually models aim predict points people look freeviewing natural scenes usually seconds typical fixation map includes several fixation points smoothened gaussian kernel consider using two models namely boolean map based saliency bms saliency based region covariance cov later evaluation salient object map computed models aim detect segment salient object whole note typical map usually contains several regions marked humans recommended extensive survey consider two models namely saliency based discriminative regional feature integration drfi highdimensional color transform hdct fig shows importance maps generated different computational methods note edge maps fixation prediction maps low resolution highlight edges whereas salient object maps focus entire objects evaluation obvious benchmark image retargeting task requires set input images corresponding saliency map requirement elegantly fits settings salient object datasets therefore exploit popular dataset contains images annotated ground truth salient regions evaluation first show visual comparison retargeted images different importance maps observed fig retargeted results salient object detection methods well preserve main salient objects without distortion though fixation prediction general biologically plausible suggests important regions way humans look retargeted images lose details meanwhile retargeted images importance map lose details layout structure next conduct user study evaluate performance retargeted images different input saliency maps previously mentioned dataset run dataset obtain retargeted squared images participants female fig visual comparison retargeted images different importance maps dataset left right original image ground truth saliency map pairs retargeted image retargeted groundtruth saliency map importance maps sobel structured edge bms voc hdct drfi respectively please view high resolution best visual effect table performance different importance maps image retargeting importance map sobel structured edge cov bms hdct drfi user ratings mar mssd university involved experiment set images provided participant note every image set contains random images six retargeted results method randomly labeled hide identities participant requested rate methods scores means bad viewing experience means excellent viewing experience shown table users prefer salient object map methods hdct drfi whereas retargeted results edge map sobel structured edge receive least rating compute two evaluation metrics mar mssd results generally similar user ratings also shown table retargeted images obtained salient object map source 
consistently favorable others namely achieving highest mar lowest mssd contrary retargeted results edge maps receive lowest mar highest mssd addition compute pearson coefficient correlations defined user ratings two novel metrics note correlation one table pearson coefficient correlation among three metrics user ratings mar mssd user ratings mar mssd user ratings mar mssd ric score shown table ccs user ratings mar negative mssd respectively demonstrates two metrics highly correlated users responses hence proposed metrics used proxy user ratings conclusion future work paper introduce two novel metrics automatically evaluate seam carving image retargeting task utilized salient object dataset benchmark showed newly proposed metrics highly correlated user ratings across six different importance maps also found retargeted results salient object map used importance map consistently favorable others believe new benchmark type evaluation measures lead improved retargeting algorithms well better understanding image retargeting problem future work aim investigate image retargeting operators apart seam carving also would like extend work considering additional cues depth rgbd images motion information videos references vidya setlur saeko takagi ramesh raskar michael gleicher bruce gooch automatic image retargeting international conference mobile ubiquitous multimedia shai avidan ariel shamir seam carving image resizing acm trans vol zhang cheng ralph martin approach image resizing computer graphics forum vol bongwon suh haibin ling benjamin bederson david jacobs automatic thumbnail cropping effectiveness acm uist luca marchesotti claudio cifarelli gabriela csurka framework visual saliency detection applications image thumbnailing international conference computer vision christel chamaret olivier meur attentionbased video reframing validation using international conference pattern recognition michael rubinstein ariel shamir shai avidan improved seam carving video retargeting acm vol erkut erdem aykut erdem visual saliency estimation nonlinearly integrating features using region covariances journal vision vol sobel feldman isotropic gradient operator image processing piotr lawrence zitnick fast edge detection using structured forests transactions pattern analysis machine intelligence vol jianming zhang stan sclaroff saliency detection boolean map approach international conference computer vision jiwhan kim dongyoon han tai junmo kim salient region detection via color transform conference computer vision pattern recognition huaizu jiang zejian yuan cheng yihong gong nanning zheng jingdong wang salient object detection discriminative regional feature integration approach conference computer vision pattern recognition serge belongie jitendra malik jan puzicha shape matching object recognition using shape contexts ieee transactions pattern analysis machine intelligence vol congyan lang tam nguyen harish katti karthik yadati mohan kankanhalli shuicheng yan depth matters influence depth cues visual saliency european conference computer vision yanxiang chen tam nguyen mohan kankanhalli jun yuan shuicheng yan meng wang audio matters visual attention ieee transactions circuits systems video technology vol bingbing mengdi tam nguyen meng wang congyan lang zhongyang huang shuicheng yan touch saliency characteristics prediction ieee transactions multimedia vol tam nguyen salient object detection via objectness proposals aaai conference artificial intelligence tam nguyen jose sepulveda salient object detection via 
augmented hypotheses international joint conference artificial intelligence tam nguyen luoqi liu salient object detection semantic priors international joint conference artificial intelligence richard duda peter hart pattern classification scene analysis vol wiley new york guner robinson color edge detection annual technical symposium international society optics photonics john canny computational approach edge detection transactions pattern analysis machine intelligence vol ali borji cheng huaizu jiang jia salient object detection benchmark transactions image processing vol radhakrishna achanta sheila hemami francisco estrada sabine salient region detection conference computer vision pattern recognition
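Appended here is a minimal numpy sketch of the two core computations described above: the dynamic program for the minimum-importance vertical seam, M(i, j) = e(i, j) + min(M(i-1, j-1), M(i-1, j), M(i-1, j+1)), and the area-ratio metric obtained by carving the ground-truth saliency mask alongside the image. Function names and the demo data are our own; `imp` stands for any of the importance maps (edge, fixation-prediction, or salient-object map) compared in the paper.

```python
import numpy as np

def optimal_vertical_seam(imp):
    """DP: M[i,j] = imp[i,j] + min of the three parents; returns one
    column index per row (the minimum-importance seam)."""
    h, w = imp.shape
    M = imp.astype(float)
    for i in range(1, h):
        left = np.r_[np.inf, M[i - 1, :-1]]
        right = np.r_[M[i - 1, 1:], np.inf]
        M[i] += np.minimum(np.minimum(left, M[i - 1]), right)
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(M[-1]))
    for i in range(h - 2, -1, -1):        # backtrack through the parents
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(np.argmin(M[i, lo:hi]))
    return seam

def remove_seam(img, seam):
    """Drop one pixel per row along the seam (2-D arrays)."""
    h, w = img.shape
    keep = np.ones((h, w), dtype=bool)
    keep[np.arange(h), seam] = False
    return img[keep].reshape(h, w - 1)

def area_ratio(gt, gt_retargeted):
    """Fraction of ground-truth salient area surviving retargeting;
    averaged over a dataset this gives the mean area ratio (MAR)."""
    return gt_retargeted.sum() / max(gt.sum(), 1)

# retarget a random "image" to square size, carving the ground truth along
imp = np.random.rand(48, 64)
gt = np.zeros_like(imp); gt[10:30, 20:40] = 1
g = gt.copy()
while imp.shape[1] > imp.shape[0]:
    s = optimal_vertical_seam(imp)
    imp, g = remove_seam(imp, s), remove_seam(g, s)
print("area ratio:", area_ratio(gt, g))
```

The MSSD metric would additionally match shape-context descriptors between gt and the carved g and sum the squared correspondence distances; that step is omitted here since it relies on an external shape-matching implementation.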
1
inflation technique solves completely classical inference problem miguel elie institute quantum optics quantum information iqoqi vienna austrian academy sciences boltzmanngasse vienna austria perimeter institute theoretical physics caroline waterloo ontario canada jul causal inference problem consists determining whether probability distribution set observed variables compatible given causal structure wolfe one introduced hierarchy necessary linear programming constraints observed distributions compatible considered causal structure must satisfy work prove inflation hierarchy complete distribution observed variables admit realization within considered causal structure fail one inflation tests quantitatively show distribution measurable events satisfying nth euclidean norm distribution realizable within given causal inflation test structure addition show corresponding nth relaxation problem consisting maximizing kth degree polynomial observed variables optimal solution introduction bayesian network causal structure directed acyclic graph vertices represent random variables generated function depending value parents nowadays causal structures commonly used bioinformatics medicine image processing sports betting risk analysis experiments quantum nonlocality important remark variables may directly observable others called hidden latent variables due presence latent variables determining whether causal structure may behind statistics set observable events inference problem difficult mathematical question given function probabilities measurable events dual causal inference problem task computing maximum value evaluated probability distributions compatible considered causal structure number less effective heuristics search probabilistic models compatible causality assumptions optimizing functionals thereof koller friedman problem proving impossibility accommodate experimental data within given network bounding values function data still open recent years though many advances see fritz fritz chaves chaves chaves chaves bohr brask chaves wolfe authors presented inflation technique hierarchy necessary constraints verifiable via linear programming distribution realizable within considered causal structure must satisfy notably inflation technique allowed authors derive polynomial inequalities triangle scenario fritz chaves one simplest causal structures inference problem solved generically inflation technique versatiliy practical performance make prominent tool attack causal inference problem inflation technique also leads naturally simple sequence linear programming relaxations dual problem function evaluate polynomial probabilities observed variables paper first study performance inflation method particular type causal structures called causal networks structures show hierarchies relaxations inference dual problems provided inflation technique complete words distribution observed variables passes inflation tests must realizable within considered causal network sequence upper bounds obtained via inflation solution dual problem converges asymptotically next show inference dual problem arbitrary causal structure mapped inference dual problem causal network put together two results imply inflation technique far relaxation alternative way understanding general causal structures paper organized follows section formulate concepts causal networks causal structures introduce inference dual problems section iii review inflation technique solve causal inference dual problems section prove extension finite 
finetti theorem diaconis freedman distributions admitting inflated extension theorem allow section prove distribution passing nth inflation test euclidean norm feasible distribution within considered causal network theorem also follow straightforwardly nth inflation relaxation dual problem differs optimal value fig generic causal network independent latent variables influence observed variables fig triangle scenario degree considered polynomial section describe mapping causal structures causal networks hence extending results general causal structures finally present conclusions causal networks causal structures causal inference problem causal network type causal structure two layers bottom layer independently distributed latent random variables top layer observable random variables see figure observable distribution generated via functions following notation vector entries represent vector entries consider example causal network dubbed triangle scenario see figure denoting respectively probability distribution realizable triangle scenario generated via functions alternatively realizable triangle scenario iff admits decomposition form general causal structure functions giving rise observed variables also depend observed variables example given instrumental scenario figure left respectively free observable latent variable observed variables generated via functions fig left instrumental scenario right instrumental scenario unpacking causal inference problem consists given observable distribution determine whether admits realization within considered causal structure formally definition causal inference problem given causal structure probability distribution observed variables decide exists probability distribution random variables observed latent compatible causal structure marginal distribution coincides definition causal inference problem different conventional one given probability distribution observed variables one asks causal models accommodate however problems equivalent following hence stick definition illustration consider example input problem causal structure test triangle scenario causal inference problem solved either providing probability distributions holds proving distributions exist contrast call dual inference problem consists given function probabilities observed events maximize value among distributions compatible given causal structure definition dual inference problem given causal structure real function distribution observed variables solve optimization problem max admits realization note definition coincide standard one literature probabilistic graphical models dual problem consists given causal structure identifying set restrictions affect distribution observed variables compatible even simplest causal structures output problem large store normal computer thus paper focus restricted notion dual coming back triangle scenario instance dual problem defined paper would maximizing distributions realizable within triangle scenario exist number variational algorithms solve problem koller friedman similarly exist many heuristics scan possible causal realizations given distribution observed variables however general practical tools demonstrate irrealizability probability distribution derive upper bounds solution dual problem scarce one inflation technique describe next iii quick overview inflation technique let distribution realizable triangle scenario suppose generate independently distributed copies variables could define random variables variables follow probability 
distribution property fig inflation triangle scenario aij aij bkl bkl cpq cpq aij bkl cpq permutations elements moreover marginal distribution diagonal variables aii bii cii also satisfies identities aii bii cii see figure resulting causal network given arbitrary distribution inflation technique consists demanding existence distribution satisfying called nth order inflation clearly admit nth order inflation realizable triangle scenario deciding existence nth order inflation cast linear program alevras notice distribution satisfying aii bii cii holds permutations therefore distribution subject constraints must fig bilocality scenario permutations elements actually original description inflation technique wolfe imposes constraints rather distribution demanding existence distribution satisfying condition shown enforce exactly constraints demanding existence distribution satisfying indeed noted wolfe distribution satisfying twirled symmetrized see distribution satisfying eqs convenience refer formulation inflation technique involving symmetries formulation added advantage symmetry constraints exploited reduce time memory complexity corresponding linear program see gent inflation technique easy generalize relax property admitting explanation terms arbitrary causal networks remember though causal networks particular type causal structures namely one adds observable variable many subindices latent variables depends makes total probability distribution invariant independent permutations type indices symmetry condition impose vectors independent permutations one latent variable index type denotes tuple subindices variable depends addition one must enforce satisfies compatibility conditions elucidation consider another causal network bilocality scenario fig three random variables defined respectively via functions assume latent variables independently distributed scenario nth inflation corresponds distribution variables bjk range must satisfy linear constraints bjk bjk bjk permutations elements also subject identities bii note inflation technique also suggests simple method solve dual problem see definition indeed probability distribution let represent distribution erated independent samples let qkn denote marginal probability axi axi let polynomial degree probabilities measurable events consider problem maxp maximization understood set distributions realizable considered causal network relax problem max qkn satisfies linear functional immediate cast linear program fact shown wolfe inflation technique partially solves well conventional dual problem one asks constraints distribution compatible considered causal network subject achieved deriving via combinatorial tools facets linear inequalities define set diagonal marginals distributions satisfying applied distribution form linear inequalities translates polynomial inequality satisfied distribution observed variables admitting nth inflation next two sections prove conversely distribution set measurable events admits inflation euclidean norm distribution realizable considered causal network similarly show generalization finetti theorem purpose section prove following result theorem let distribution satisfying symmetry constraints call qkn marginal probability exist normalized probability distributions achievable considered causal network probabilities qkn denotes statistical distance probability distributions proof prove result triangle scenario generalization obvious given arbitrary distribution variables consider symmetrization defined aij aij bkl 
bkl cpq cpq aij bkl cpq note addition distribution satisfying symmetry condition fulfills symmetrized distribution satisfies let deterministic distribution assigning values random variables since distribution convex combination deterministic points follows distribution satisfying expressed convex combination symmetric distributions form ease notation assume values fixed denote latter distribution simply call marginal verified symmetry given formula notice reproduced triangle scenario indeed latent variables take values uniformly distributed consider marginal distribution symmetry expressed sum taken tuples implies repeated indices move one block variables another compare straightforward time sum contains possible tuples statistical distance two distributions bounded times number tuples indices namely plus times number tuples repeated indices namely result finally let distribution satisfying distribution form every convexity statistical distance qkn extending result general causal networks straightforward sketch proof first action corresponding symmetrization deterministic distribution equals distribution whose uniform mixture tuple indices deterministic distributions form thus follows realizable within causal network marginal also uniform mixture deterministic distributions similar type repeated indices allowed different blocks variables statistical difference thus bounded nkl proof convergence inflation technique ready prove first main result let probability distribution observed variables suppose admits nth inflation define polynomial let linear functional distributions note due conditions invoking theorem previous section implies distributions realizable within considered causal structure words distribution admitting inflation exists realizable distribution euclidean norm since set compatible distributions closed rosset taking limit follows distribution passing inflation tests must realizable finally notice previous argument also used prove convergence sequence linear programs effect let polynomial degree maxp let defined extended finetti theorem certain practical cases may know full probability distribution observable variables probabilities restricted set measurable events apply inflation technique cases rather fixing value probability products like would impose constraint distribution satisfying also dubbed nth order inflation distribution measurable events example consider triangle scenario fig assume experimental setup allows detect events form set measurable events input causal inference problem distribution nth order inflation would satisfy linear conditions aii bii cii proofs convergence presented easily extend scenario indeed choosing polynomial following derivation conclude distribution measurable events admitting nth order inflation euclidean norm realizable distribution similarly one bound speed convergence inflation technique applied maximize polynomials probability distribution measurable events extension results general causal structures far referring causal networks causal structures observed variables depend number independent latent variables however general causal structure value given variable depend latent variables also values observed variables inflation technique solve inference dual problems general structures well following show causal dual inference problem arbitrary causal structure mapped inference dual problem set measurable events extended causal network hence solve via inflation technique general causal structures causal networks follow following three 
steps fig first step first step called see figure note original structure edge variable latent variable delete edge replace edges variable direct successors obtain causal structure predictive power indeed first structure could carry copy direct successors implies probability distribution observed variables realizable causal structure also realized original causal structure conversely suppose original causal structure depends variables group vector deterministic function internal random variable one simulate probability distribution observed variables causal structure distributing direct successors making compute locally value given sequentially edges pointing latent variable end equivalent causal structure latent variables edges pointing fig second step unpacking second step unpacking figure draw inspiration notion interruption described wolfe sainz let observed variable depending latent variables observed variables suppose take different values step break edges unpack variable observed variables defined via expression unpacking variables arrive causal structure disconnected random variables observable parents causal graph number observed variables depending latent variables previous step independent probabilities observed variables original causal structure obtained probabilities set measurable events new causal structure via relation iia iib respectively superindices variables direct predecessors original causal structure fig third step last step figure introduce new latent variable observable parent original graph edge end step layer observed variables depending another layer independent latent variables none causal network original causal inference dual problem hence mapped inference dual problem causal network set measurable events given dealing inference problem observable parents alternative step simply erase observed parents structure consider resulting causal network constraints applying inflation technique second network computationally cheaper since less observed variables note though mapping allow optimize polynomials one use solve causal dual problem illustration consider instrumental scenario see fig suppose random variables take two possible values define vector variable via function similarly unpack vector resulting causal network unpacking depicted side fig let distribution observed variables new network set measurable events correspondence original distribution alternatively erase variable causal network take distribution observed variables fundamental object set measurable events corresponding probabilities vii conclusion paper proven hierarchy tests proposed wolfe bound set distributions admitting representation given causal network complete sense distribution admit representation fail one tests quantitatively showed distribution passing test euclidean norm distribution realizable within considered causal network also proved linear programming relaxation provided inflation technique solution dual problem away optimality top showed causal inference dual problem general causal structure mapped causal inference dual problem extended causal network set measurable events regime also proved convergence inflation technique put together two results thus show inflation technique much useful machinery derive statistical limits alternative way define causal structures future work would interesting adapt results quantum case inflation technique also applied wolfe would require extension finite quantum finetti theorem koenig renner held certain structures subject certain symmetry 
constraints arduous task acknowledgements work supported european research council research supported part perimeter institute theoretical physics research perimeter institute supported government canada department innovation science economic development canada province ontario ministry research innovation science references wolfe spekkens fritz inflation technique causal inference latent variables koller friedman probabilistic graphical models principles techniques adaptive computation machine learning mit press fritz beyond bell theorem correlation scenarios new phys fritz chaves entropic inequalities marginal problems ieee trans info theo chaves luft gross causal structures entropic information geometry novel scenarios new phys chaves luft maciel gross janzing inferring latent structures via information inequalities proc conference uncertainty artificial intelligence auai chaves polynomial bell inequalities phys rev lett bohr brask chaves bell scenarios communication phys diaconis freedman finite exchangeable sequences annals probability alevras dimitris linear optimization extensions springer berlin heidelberg gent petrie puget chapter symmetry constraint programming handbook constraint programming foundations artificial intelligence vol edited rossi van beek walsh elsevier rosset gisin wolfe set semialgebraic unpublished wolfe sainz interruption technique causal inference quantum instrumental scenario unpublished koenig renner finetti representation finite symmetric quantum states math phys
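For concreteness, the two defining constraints of the n-th order inflation of the triangle scenario, which appear garbled in the text above, can be reconstructed in one possible notation (index conventions are ours) as

```latex
P^{(n)}\bigl(\{a_{ij}\},\{b_{kl}\},\{c_{pq}\}\bigr)
  = P^{(n)}\bigl(\{a_{\pi(i)\sigma(j)}\},\{b_{\sigma(k)\tau(l)}\},\{c_{\tau(p)\pi(q)}\}\bigr),
\qquad
P^{(n)}\bigl(\{a_{ii}\},\{b_{ii}\},\{c_{ii}\}\bigr)=\prod_{i=1}^{n}P(a_i,b_i,c_i),
```

for all permutations pi, sigma, tau of {1, ..., n}. The following sketch casts the n = 2 inflation test as the LP feasibility problem described in the text, for binary observed variables with A = f(Y, Z), B = g(Z, X), C = h(X, Y), so that copy A_ij reads Y_i and Z_j, and so on. It assumes numpy/scipy and is an illustration of the construction, not the authors' code.

```python
import itertools
import numpy as np
from scipy.sparse import coo_matrix
from scipy.optimize import linprog

VARS = ([("A", i, j) for i in (0, 1) for j in (0, 1)]
        + [("B", i, j) for i in (0, 1) for j in (0, 1)]
        + [("C", i, j) for i in (0, 1) for j in (0, 1)])
OUTCOMES = list(itertools.product((0, 1), repeat=len(VARS)))   # 2^12 points
IDX = {o: t for t, o in enumerate(OUTCOMES)}

def permuted(o, piY, piZ, piX):
    """Relabel latent copies: the group action behind the symmetry
    constraints reconstructed above."""
    val = dict(zip(VARS, o))
    out = []
    for name, r, s in VARS:
        if name == "A":   out.append(val[("A", piY[r], piZ[s])])
        elif name == "B": out.append(val[("B", piZ[r], piX[s])])
        else:             out.append(val[("C", piX[r], piY[s])])
    return tuple(out)

def second_order_inflation_feasible(P):
    """P is a 2x2x2 array with entries P[a,b,c]; True iff some Q over the
    12 copy variables satisfies symmetry + diagonal-marginal constraints."""
    data, ri, ci, rhs = [], [], [], []
    def add_row(cols_vals, b):
        r = len(rhs)
        for c, v in cols_vals:
            ri.append(r); ci.append(c); data.append(v)
        rhs.append(b)
    # symmetry under swapping the two copies of each latent source
    ID, SW = (0, 1), (1, 0)
    for piY, piZ, piX in ((SW, ID, ID), (ID, SW, ID), (ID, ID, SW)):
        for o in OUTCOMES:
            o2 = permuted(o, piY, piZ, piX)
            if o < o2:
                add_row([(IDX[o], 1.0), (IDX[o2], -1.0)], 0.0)
    # diagonal marginal factorizes into two independent copies of P
    diag = [VARS.index(v) for v in (("A", 0, 0), ("B", 0, 0), ("C", 0, 0),
                                    ("A", 1, 1), ("B", 1, 1), ("C", 1, 1))]
    for w in itertools.product((0, 1), repeat=6):
        cols = [(t, 1.0) for t, o in enumerate(OUTCOMES)
                if all(o[d] == x for d, x in zip(diag, w))]
        add_row(cols, float(P[w[0], w[1], w[2]] * P[w[3], w[4], w[5]]))
    A = coo_matrix((data, (ri, ci)), shape=(len(rhs), len(OUTCOMES)))
    res = linprog(np.zeros(len(OUTCOMES)), A_eq=A.tocsr(),
                  b_eq=np.array(rhs), bounds=(0, None), method="highs")
    return res.status == 0    # status 0 = optimal found, i.e. LP feasible

# a product distribution is realizable in the triangle, so it must pass
P = np.full((2, 2, 2), 1 / 8)
print(second_order_inflation_feasible(P))   # True
```

Normalization of Q is implied by the diagonal-marginal rows (they partition the outcome space and their right-hand sides sum to one), and enforcing the three copy swaps suffices since they generate the full symmetry group for n = 2.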
10
june revised september report lids incremental aggregated proximal augmented lagrangian algorithms nov dimitri abstract consider minimization sum large number convex functions propose incremental aggregated version proximal algorithm bears similarity incremental aggregated gradient subgradient methods received lot recent attention cost function differentiability strong convexity assumptions show linear convergence sufficiently small constant stepsize result also applies distributed asynchronous variants method involving bounded interprocessor communication delays consider dual versions incremental proximal algorithms incremental augmented lagrangian methods separable optimization problems contrary standard augmented lagrangian method methods admit decomposition minimization augmented lagrangian update multipliers far frequently incremental aggregated augmented lagrangian methods bear similarity several known decomposition algorithms however incremental nature augmented lagrangian decomposition algorithm stephanopoulos westerberg related methods tadjewski ruszczynski alternating direction method multipliers admm recent variations compare methods terms properties highlight potential advantages limitations also address solution separable optimization problems use nonquadratic augmented lagrangiias exponential dually consider corresponding incremental aggregated version proximal algorithm uses nonquadratic regularization entropy function finally propose closely related linearly convergent method minimization large differentiable sums subject orthant constraint may viewed incremental aggregated version mirror descent method incremental gradient subgradient proximal methods consider optimization problems cost function consists additive components minimize def subject dimitri bertsekas dept electr engineering comp science laboratory mation decision systems cambridge convex functions closed convex set focus case number components large incentive use incremental methods operate single component iteration rather entire cost function problems type arise often various practical contexts received lot attention recently suitable algorithms include incremental subgradient method abbreviated cost used place full ponent fik selected iteration arbitrary subgradient subgradient positive stepsize denotes projection important components taken iteration equal frequency using either cyclic random selection scheme methods type properties studied long time relevant literature beginning voluminous list author survey discusses history algorithm convergence properties connections stochastic approximation methods generally diminishing stepsize needed convergence even components differentiable moreover convergence rate properties generally better index selected randomization set deterministic cyclic rule first shown bertsekas see also another method introduced author studied incremental proximal method abbreviated arg min fik method relates proximal algorithm martinet rockafellar way method relates classical nonincremental subgradient method similar method important components taken iteration equal frequency theoretical convergence properties algorithms similar generally believed robust property inherited nonincremental counterpart turns structures methods quite similar important fact regard method equivalently written throughout paper operate within space standard euclidean norm denoted vectors considered column vectors prime denotes transposition scalar coordinates optimization vector denoted superscripts sequences 
denote subgradient convex function vector iterates indexed subscripts use choice within set vector gradient subgradients clear context differentiable special subgradient new point see bertsekas prop prop prop special subgradient determined optimality conditions proximal maximization example may difficult consistent thus determining special subgradient problem cases preferable implement iteration proximal form rather projected form however equivalent form iteration compared iteration suggests close connection iterations fact connection basis combination two methods provide flexibility case cost components well suited proximal minimization others see incremental aggregated gradient subgradient methods incremental aggregated methods aim provide better approximation subgradient entire cost function preserving economies accrued computing single component subgradient iteration particular aggregated subgradient method abbreviated ias form delayed subgradient earlier iterate assume indexes satisfy fixed nonnegative integer thus algorithm uses outdated subgradients previous iterations components need compute subgradient components iteration ias method first proposed knowledge bertsekas borkar motivated primarily distributed asynchronous solution dual separable problems similar ones discussed section distributed asynchronous context natural assume subgradients used delays convergence result shown assuming stepsize sequence diminishing satisfies standard conditions result covers case iteration case general case admits similar analysis note distributed algorithms involve bounded delays iterates long history common various distributed asynchronous computation contexts including coordinate descent methods see sections note limitation iteration iteration one store past subgradients moreover whatever effect use previously computed subgradients fully manifested subgradient component computed significant number components large note also approaches approximating full subgradient cost function aim computational economies methods see bertsekas references quoted surrogate subgradient methods see bragin references quoted ias method contains special case incremental aggregated gradient method abbreviated iag case components differentiable method attracted considerable attention thanks particularly interesting convergence result favorable case component gradients lipschitz continuous strongly convex shown iag method linearly convergent solution sufficiently small constant stepsize result first given blatt hero gauchman case cost components quadratic delayed indexes satisfy certain restrictions consistent cyclic selection components iteration see also linear convergence result subsequently extended nonquadratic problems various forms method several authors including schmidt roux bach mairal defazio caetano domke several schemes proposed moreover several address limitation store past subgradients experimental studies confirmed theoretical convergence rate advantage iag method corresponding incremental gradient method preceding favorable conditions use arbitrary indexes iag method introduced paper gurbuzbalaban ozdaglar parillo gave elegant particularly simple linear convergence analysis incremental aggregated proximal algorithm paper consider incremental aggregated proximal algorithm abbreviated iap form arg min fik delayed subgradient earlier iterate assume indexes satisfy boundedness condition intuitively idea term proximal minimization linear approximation term minus constant would used standard proximal 
algorithm arg min straightforward verify following equivalent form iap iteration arg min fik form algorithm executed process first use preceding subgradients compute via execute iteration starting note limitation iteration iteration shared incremental aggregated methods keep updating vector one store past subgradients similar iteration iap iteration equivalent form written used executing iteration typically obtain subgradient subsequent iap iterations example unconstrained case see possible prove various convergence results iap iteration equivalent forms case stepsize diminishing satisfies standard conditions results line similar results method given ias method given since difference iap ias methods iap place ias intuitively diminishing stepsize use asymptotic performance two methods similar indeed convergence proofs two methods fairly similar comparable assumptions thus convergence analysis incremental aggregated proximal algorithm unconstrained problems unconstrained case component functions differentiable iap iteration written case one may expect similar convergence behavior iap iag methods favorable conditions allow use constant stepsize particular prove following iap method proposition assume functions convex differentiable satisfy constants assume function strongly convex unique minimum denoted exists sequence generated iap iteration constant stepsize converges linearly sense kxk scalars proof given section follows closely one iag iteration relies similarity iterations use term place term key idea view iap iteration gradient method errors calculation gradient appropriately bound size errors similar known lines convergence proofs gradient subgradient methods errors proof section applies also diagonally scaled version iap separate constant stepsize used coordinate note line proof prop readily extend constrained case clear whether conditions linear convergence proved section however consider incremental aggregated proximal algorithm uses nonquadratic regularization term seems cope better case nonnegativity constraints finally return similarity iap method ias method note two methods admit similar distributed asynchronous implementations described paper context central processor executes proximal iteration selected component fik processors compute subgradients components points supplied central processor subgradients involve delay may unpredictable hence asynchronous character computation local versions proximal algorithms analysis paper requires convex straightforward way extend incremental proximal methods nonconvex problems involving twice differentiable functions describe briefly idea use local version proximal algorithm proposed author paper based local version fenchel duality framework given algorithm applies problem minimize subject twice continuously differentiable functions locally convex set defined terms assumptions relate second order sufficiency conditions nonlinear programming see local proximal algorithm form arg min sufficiently small ensure function minimized convex locally set version algorithm also given incremental version local proximal iteration problems involving sums functions particular consider problem minimize subject twice continuously differentiable functions locally convex set incremental local proximal iteration fik arg min index cost component iterated one may also consider aggregated form incremental iteration convergence properties algorithms interesting subject investigation lies however outside scope present paper also another way combine local proximal 
incremental ideas case nonconvex separable problem vector minimize def def subject twice continuously differentiable functions problem admits multiplier pair satisfying standard second order sufficiency conditions approach also developed problem converted equivalent problem minimize def min subject sufficiently small fixed suitably small neighborhood convex locally positive definite since minimization problem defines separable form minimize subject locally convex fixed suitably small values solved using augmented methods next section denoting optimal solution problem given shown prop see also prop differentiable thus gradient algorithm written equivalently using eqs local proximal form arg min kxi xik note minimization amenable decomposition including solution using incremental aggregated augmented lagrangian admm methods next section assuming sufficiently small induce required amount convexification make problem convex locally within neighborhood convergence properties algorithm developed based local theory conjugate functions fenchel duality developed refer papers discussion local aspects minimization well implementation newton iteration analogy gradient method analysis outside scope present paper interesting subject investigation incremental augmented lagrangian methods second objective paper consider application iap methods dual setting take form incremental augmented lagrangian algorithms separable constrained optimization problem minimize subject shown section convex functions positive integer may depend nonempty closed convex subsets given matrices given vectors optimization vector objective consider algorithms allow decomposition minimization augmented lagrangian separate augmented lagrangian minimizations performed respect single component note problem unaffected redefinition scalars long changed may beneficial adjust scalars residuals small near optimal may fact attempted course algorithms form heuristic following standard analysis dual function problem given inf dual vector decomposing minimization components expressed additive form concave function inf dual methods separable problems assuming dual function components true example compact dual function minimized classical subgradient method takes form obtained stepsize subgradients components updated according arg min additive form dual function makes suitable application incremental methods including ias method described section fact proposed separable problem mind case components differentiable true infimum definition attained uniquely one may also use iag method constant sufficiently small stepsize incremental aggregated version classical dual gradient method proposed often attributed everett takes form gradient dual function component given minimizer assumed unique differentiability streamlining computations using preceding relations see iteration following form case dual function maximized set done using incremental constraint projection methods involving projection proximal maximization single set time methods type proposed discussion beyond scope present paper incremental aggregated dual gradient iteration iadg select component index update single component according arg min hik aik keeping others unchanged yki update according convergence properties method governed known results iag method noted section particular obtain linear convergence constant sufficiently small stepsize assuming lipschitz continuity strong convexity frequency updating note however linear convergence result used primal problem additional convex 
inequality constraints corresponding dual problem involves nonnegativity constraints augmented algorithms separable problems nonincremental incremental subgradient gradient methods described convenient purposes decomposition convergence properties tend fragile hand stable augmented lagrangian methods major drawback quadratic penalty term added lagrangian function resulting augmented lagrangian separable amenable minimization decomposition limitation augmented lagrangian approach addressed number authors various algorithmic proposals survey first proposal type paper stephanopoulos westerberg based enforced decomposition minimizing augmented lagrangian separately respect component vector holding components fixed estimated values minimization components followed multiplier update using standard augmented lagrangian formula decomposition method attracted considerable attention motivated research including similarly structured methods tadjewski ruszczynski include convergence analyses give references earlier works incremental aggregated proximal algorithm bears similarity methods note however methods motivated nonconvex separable problems duality gap analysis requires convex programming structure duality gap method applied convex separable problems including linear programming another method convex separable problems uses augmented lagrangian minimizations given deng lai peng yin give several related references including paper chen teboulle method based use primal proximal terms augmented lagrangian addition quadratic penalty term spirit rockafellar proximal method multipliers involves two separate penalty parameters convergence satisfy certain restrictions papers hong luo robinson tappenden also propose algorithms use primal proximal terms two penalty parameters differ algorithm update primal variables rather jacobi fashion requiring additional assumptions see also dang lan related algorithm gaussseidel updating somewhat similar incremental mode iteration paper based results experiments appears beneficial different possibility deal nonconvex separable problems based convexification provided local proximal algorithm discussed end preceding section application nonconvex separable problems described see also tanikawa mukai proposed method aims improved efficiency relative approach discussion additional proposals decomposition methods use augmented lagrangians given recent paper hamdi mishra still another approach used exploit structure separable problem alternating direction method multipliers admm popular method convex programming first proposed glowinskii morocco gabay mercier developed gabay method applies problem minimize subject closed convex functions given matrix method better suited augmented lagrangian method exploiting special structures including separability capable decoupling vectors augmented lagrangian kax discussion properties many applications method refer extensive literature including books section section give many references form admm separable problems overcome coupling variables augmented lagrangian minimization first derived bertsekas tsitsiklis section see also section describe form specialized admm later section consider incremental proximal methods iap maximizing dual function taking account concavity components method takes form arg maxr qik index component chosen iteration positive parameter method given section shown implemented use decoupled augmented lagrangian minimizations involving single component vector iap method takes form arg maxr qik considered earlier 
within dual separable constrained optimization context section convergence results noted section apply method particular prop iap method convergent sufficiently small constant stepsize assuming differentiable lipschitz continuous gradient strongly concave course differentiability restrictive assumption amounts attainment minimum unique point definition describe incremental proximal methods iap implemented terms augmented lagrangian minimizations decompose respect components incremental character end review fenchel duality relation proximal augmented lagrangian iterations given first rockafellar subsequently many sources including author monograph textbook accounts chapter section duality proximal augmented lagrangian iterations given proper convex function let closed proper concave function defined inf later concave functions use terminology used convex functions applied conjugacy relation since conjugate convex function moreover closed recovered using conjugacy theorem sup conjugate convex function see prop key fact assuming closed proximal iteration arg maxr equivalently implemented two steps arg minr followed see section moreover subgradient relations shown straightforward application fenchel duality theorem maximization involves sum concave functions closedness used ensure duality relation holds guarantee minimum attained note form augmented lagrangian minimization relating somewhat contrived problem minimizing subject equality constraint augmented lagrangian method translate duality proximal augmented lagrangian iterations described constrained optimization context setting stage using duality incremental context consider generic convex programming problem form minimize subject convex function convex set matrix consider also corresponding primal dual functions inf inf convex concave respectively assume closed proper optimal value problem finite also closed proper concave duality gap see section relation primal dual functions particular equivalent form inf inf inf satisfy conjugacy relation based preceding discussion follows proximal iteration equivalently written process arg minr followed moreover eqs write iteration terms augmented lagrangian obtain classical first order augmented lagrangian method using definition primal function see minimization written inf inf inf kuk kay inf inf inf kay inf augmented lagrangian function kay preceding calculation also follows minimizes augmented lagrangian arg min iteration equivalently written multiplier iteration precisely first order augmented lagrangian method equivalent proximal iteration arg maxr view eqs also written form special subgradient given note minimizing need exist unique existence must assumed way assuming compact level sets example verified constraint problem minimizing subject dual optimal solution primal optimal solution problem augmented lagrangian algorithm generate sequences incremental augmented lagrangian methods duality proximal augmented lagrangian minimizations outlined generic holds related contexts based similar use fenchel duality theorem context separable problem holds incremental form replaced qik iteration replaced qik iap iteration refer two methods incremental augmented lagrangian method abbreviated ial incremental aggregated augmented lagrangian method abbreviated iaal based discussion algorithm ial method arg maxr qik implemented follows already noted section incremental augmented lagrangian iteration ial select component index update single component according arg min hik aik bik kaik bik keeping others 
unchanged yki update according aik bik method component indexes selected iteration equal frequency note augmented lagrangian minimization decoupled respect components thus overcoming major limitation augmented lagrangian approach separable problems derive iaal method use equivalent form iap algorithm see method similar form ial method except first translated multiple sum delayed subgradients particular iaal iteration takes form arg maxr qik applying relations follows write iaal iteration two steps select component index update single component according arg min hik aik bik kaik bik keeping others unchanged yki update according aik bik needed computation generated note subgradients thus streamlining preceding relations see iaal updates written arg min hik aik bik aik bik denote bik neglect constant term bik augmented lagrangian write iteration way depends scalars sum incremental aggregated augmented lagrangian iaal iteration select component index update single component according arg min hik aik aik keeping others unchanged yki update according comparing ial method iaal method see require comparable computations per iteration ial method requires diminishing stepsize convergence iaal method converge constant stepsize assuming dual function components lipschitz continuous gradients dual function strongly concave prop intuitively use constant stepsize iaal method asymptotically effective ial method course strongly convex example important case polyhedral arises integer programming analysis guarantees convergence iaal method stepsize diminishing case unclear ial iaal methods effective given problem ial iaal algorithms require initial multiplier regarding delayed indexes iaal algorithm iteration executed single processor appropriate choose iteration index component last changed prior current index component yet updated prior take let initial choice case formal statement iaal method given eqs replaced however different value may apply iteration executed distributed asynchronous computing environment corresponding ias method note multiplier updated time component updated suggests stepsize chosen carefully possibly experimentation moreover strong convexity assumption essential convergence method constant stepsize indeed example chen yuan used show iaal algorithm need converge value constant stepsize strong convexity assumption alternative possibility perform batch component updates form multiplier updates form example one may restructure iaal iteration consists full cycle updates sequentially according obtain update according note sequential update according amounts cycle coordinate descent iterations minimizing augmented lagrangian therefore variant iaal iteration may viewed implementation augmented lagrangian method approximate minimization augmented lagrangian using coordinate descent algorithm type may interesting suggested past see bertsekas tsitsiklis example eckstein linear convergence shown certain assumptions hong luo algorithm worthy investigation particularly view favorable computational results given wang hong luo let also note work hong chang wang razaviyayn luo derives algorithm separable problem quite similar iaal algorithm using different assumptions line development paper proves convergence linear convergence rate result comparison admm compare iaal iteration admm note connection admm augmented lagrangian methods clarified long ago series papers particular lions mercier proposed splitting algorithm finding zero sum two maximal monotone operators known algorithm turns algorithm 
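As a point of reference for the ADMM discussion above and the comparisons that follow, here is a minimal sketch of the classical two-block ADMM on the prototypical problem of minimizing f1(x) + f2(z) subject to x - z = 0. The quadratic choices f1(x) = 1/2 ||x - a||^2 and f2(z) = 1/2 ||z - b||^2, the penalty parameter, and the iteration count are illustrative only.

import numpy as np

def admm_consensus(a, b, c=1.0, iters=100):
    # Two-block ADMM for min f1(x) + f2(z) s.t. x - z = 0, with quadratic
    # f1, f2 as above; the optimum is x = z = (a + b)/2. The vector u is
    # the multiplier of the coupling constraint in scaled form.
    x = np.zeros_like(a); z = np.zeros_like(a); u = np.zeros_like(a)
    for _ in range(iters):
        # each block minimization sees only its own augmented Lagrangian term
        x = (a + c * (z - u)) / (1.0 + c)
        z = (b + c * (x + u)) / (1.0 + c)
        u = u + x - z            # multiplier update with stepsize equal to c
    return x, z

a, b = np.array([1.0, 3.0]), np.array([3.0, -1.0])
print(admm_consensus(a, b))      # both blocks approach (a + b)/2 = [2., 1.]

Note how the two block minimizations decouple once the penalty term is split between the x and z copies; this is the decoupling that motivates the separable forms of ADMM discussed here.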
paper entitled direct extension admm convex minimization problems considers algorithm special case admm convergence counterexample possible correct specialization admm separable problems dating unknown authors given shortly convergent broadly applicable conditions admm contains special case admm shown paper eckstein bertsekas showed general form proximal algorithm finding zero maximal monotone operator proposed rockafellar contains special case algorithm hence also admm thus admm augmented lagrangian method common ancestry special cases general form proximal algorithm finding zero maximal monotone operator common underlying structure two methods reflected similar formulas admm advantage flexibility allow decomposition expense typically slower practical convergence rate convenient form admm separable problem derived together corresponding coordinate descent version augmented lagrangian method section example see also section wang hong luo apparently unaware form admm give related algorithms referred algorithms paper however involve updating multiplier vectors place single multiplier update following algorithm iteration given admm algorithm generates follows admm iteration separable problems perform separate augmented lagrangian minimization arg min update according note contrary augmented lagrangian method best strategy adjusting usually clear see clear way adjust parameter improve performance admm result efficiency often determined trial error closely related refined form admm also derived section example aims improve parameter selection exploiting structure matrices uses parameter iteration place number submatrices nonzero jth row version multiplier update essentially involves diagonal scaling iteration maintains additional vectors zki represent estimates optimum following form aji denotes jth row matrix diagonally scaled admm iteration separable problems perform separate augmented lagrangian minimization zki arg min update according aji note preceding two admm iterations coincide nonzero row matrices comparing iaal iteration admm iterations note involve fairly similar operations particular admm mutiplier update approximates average full cycle components iaal multiplier updates executed times less frequently reminiscent difference proximal incremental proximal iterations different multiplier update frequencies iaal admm suggests assuming iaal converges stepsize chosen much smaller stepsize admm say crude approximation comparable performance also two major differences admm iterations guaranteed convergence constant stepsize weaker conditions differentiability strong convexity required hand iaal method requires diminishing stepsize general lipschitz continuity strong convexity constant stepsize arbitrary must sufficiently small iaal method single component updated iteration admm components updated problems may work favor iaal particularly large case generally seems favor incremental methods thus separable problems section one may roughly view iaal method incremental variant admm advantage incrementalism may offset less solid convergence properties computational comparison two methods helpful clarifying relative merits diagonally scaled admm iteration suggests also similar diagonal scaling iaal iteration simplest way accomplish use iaal method scaling constraints multiplying constraint equations different scaling factors turn introduce diagonal scaling dual variables proposition still apply form scaling assuming lipschitz continuity strong convexity comparison methods tadjewski ruszczynski 
methods motivated earlier algorithm apply separable constrained optimization problem section similar use different assumptions method requires differentiability second order sufficiency assumptions applies nonconvex separable problems may duality gap method applies separable problems convex possibly nondifferentiable cost function methods also similar iaal method use different approximations quadratic penalty terms particular instead vectors appear eqs use terms iteratively adjusted aim improve approximation quadratic penalty terms standard augmented lagrangian papers provide convergence analysis involving suitable choices various parameters although convergence results obtained strong ones admm major difference methods iaal method like admm update components simultaneously iteration incremental character proof proposition similar convergence proofs incremental gradient methods including one iag method follow proof prop based viewing iap iteration constant stepsize gradient method errors calculation gradient eqs deal delays iterates use following lemma due feyzmahdavian aytekin johansson also used convergence proof lemma let nonnegative sequence satisfying max max positive integer nonnegative scalars following proof take stepsize small needed various calculations valid also convenience expressing various formulas involving delays consider algorithm large enough iteration indexes delayed iteration indexes following calculations larger sufficient consider algorithm starting iteration note lipschitz condition implies lipschitz condition bound particular denoting special case unique minimum break proof prop steps first writing iteration gradient iteration errors carrying along errors standard line linear convergence analysis gradient methods without errors bounding errors finally using lemma write iteration gradient method errors error term given relate gradient error distance kxk verifying relation kxk kek done subtracting sides sides carrying straightforward calculation use bound according kek sufficiently small particular kek obtained preceding relation using inequality kxk holds sufficiently small consequence fact gradient lipschitz assumption gradient iteration error reduces distance see prop use strong convexity assumption coefficient strong convexity lipschitz condition invoke relation kxk see prop used bound term show kxk particular using relations kxk kxk kxk follows prove error proportional stepsize maximum distance iterates past iterates kek max straightforward using lipschitz assumption bound particular kek lik lik kxk kxk moreover eqs using relation range obtain kek generically use denote function scalar bounded open interval containing origin thus kek also since range lies range follows max constant independent combining obtain use eqs obtain kxk max particular two terms bounding kek view bounded terms times respectively use lemma kxk sufficiently small shows kxk converges linearly completes proof convergence rate comparison small stepsizes note provides refined rate convergence estimate estimate precise second order term right shows ratio coefficient strong convexity plays important role particular convergence rate improved condition number small role ratio determining convergence rate gradient methods without error see convergence rate estimates like one also similarly derived iag shown standard nonincremental gradient method error term equal estimates first order neglecting second order term side identical iap iag standard nonincremental gradient method suggests small values 
iap iag perform comparably nonincremental gradient method performs much worse requires times much overhead per iteration calculate full gradient cost function nonquadratic incremental proximal augmented lagrangian methods augmented lagrangian methods section apply linear equality constrained problems multiplier vector unconstrained allows application linear convergence result prop consider convex inequality constraints whose multipliers must nonnegative result dual problem involves orthant constraint linear convergence result prop apply unfortunately orthant constraint instead proof prop breaks critical inequality fails fact knowledge linear convergence rate result iag method applied orthant constraint currently available moreover convergence augmented methods discussed section analyzed case section try address difficulty using different nonquadratic proximal approach particular introduce incremental augmented lagrangian methods convex inequality constraints quadratic penalty augmented lagrangian replaced suitable nonquadratic penalty one objectives develop linearly convergent methods exploit separability similar ones section second objective develop corresponding dual linearly convergent incremental aggregated gradient proximal methods differentiable minimization subject nonnegativity constraints nonquadratic augmented lagrangian methods inequality constraints consider convex programming problem minimize subject convex functiona convex set corresponding dual problem maximize subject concave function multiplier vector given inf apply augmented lagrangian method first proposed kort bertsekas developed number subsequent works including monograph chapter method makes use nonquadratic penalty function following properties twice differentiable iii common interesting special case exponential exp corresponding exponential augmented lagrangian method dual proximal algorithm known entropy minimization algorithm analyzed first tseng bertsekas related classes methods also contain exponential entropy methods special cases proposed analyzed later iusem svaiter teboulle see also survey iusem contains followup work many references augmented lagrangian algorithm corresponding problem maintains multipliers inequality constraints consists finding arg min penalty parameters followed multiplier iteration ajk alternatively equivalently based fenchel duality theorem one may show multiplier iteration written proximal form arg maxr dual function given convex conjugate see equivalence expressions let write note augmented lagrangian minimization yields arg min primal function inf minimization fenchel dual maximization applying fenchel duality theorem maximizing vector equal gradient given formula note dual problem maximize subject proximal maximization unconstrained reason conjugate takes value outside nonnegative orthant character barrier function within nonnegative orthant example exponential function conjugate entropy function important advantage nonquadratic augmented lagrangian method versus quadratic counterpart leads twice differentiable augmented lagrangians advantage also carries incremental augmented lagrangian methods presented next nonquadratic incremental augmented lagrangian methods inequality constraints consider separable constrained optimization problem minimize subject gji gji convex functions convex sets similar development section corresponding incremental aggregated augmented lagrangian method parallels iaal maintains vector operates follows incremental aggregated augmented lagrangian iteration 
inequalities iaali select component index update single component according arg min hik jik keeping others unchanged yki update according gji note minimization low dimension involves nonquadratic penalty function thus even component minimization likely require form iterative line search note also update formula equivalently written arg maxr qik dual function components given inf gij form method viewed incremental aggregated proximal method maximizing gji inf convergence properties iaali corresponding incremental aggregated proximal method solving dual problem maximize subject interesting research subjects discuss nonquadratic incremental aggregated proximal algorithm nonnegativity constraints consider minimization problem minimize def subject convex functions translated minimization context algorithm maintains vector updated follows nonquadratic incremental aggregated proximal iteration select component index obtain arg minn fik xjk xjk analysis convergence properties algorithm beyond scope paper subject separate publication particular interesting investigate linear convergence method parameters ajk constant sufficiently small appropriate lipschitz continuity strong convexity assumptions similar prop note differentiating cost function minimization obtain optimality condition written expression may used line proof section place corresponding formula unconstrained iap algorithm written form hence also quadratic two preceding formulas coincide however contrary iteration iteration preserves strict positivity iterates addresses problem incremental aggregated proximal algorithm nonnegativity constraints illustration algorithm consider special case exponential function entropy function exp exist eqs using constant stepsize coordinate takes form xjk component index selected iteration write iteration xjk enk error vector played important role proof prop use line analysis section speculate linear convergence iteration equivalent form assume minimum satisfies strict complementary slackness condition speculate behavior small neighborhood around consider first iterates xjk small neighborhood around note errors ejk logarithms sequences xjk near negligible relative gradient components view form iteration condition negative hence ratios within linearly decreasing towards consider next iterates xjk small neighborhood around close corresponding positive numbers iterated according xjk looks like incremental aggregated gradient iteration logarithms indeed making transformation variables introducing function exp exp gradient related gradient relation exp exp exp see iteration written zkj xjk ejk xjk exp zkj thus neglecting effect coordinates fast diminishing iteration behaves like iap method restricted space coordinate logarithms stepsizes near close positive constants combining preceding argument proof prop show method converges locally started sufficiently close assuming strict complementarity condition appropriate stepsize lipschitz continuity strong convexity conditions proof long deferred future publication moreover xjk converges linearly xjk also converges linearly however sophisticated argument needed show global linear convergence combining line proof prop existing convergence proofs entropy minimization algorithm dual exponential method multipliers incremental aggregated gradient algorithm nonnegativity constraints finally let note analog iag method nonnegativity constraints analogy form xjk equivalently xjk exp difference use iteration place compared ias method case functions differentiable stepise 
constant denotes projection onto nonnegative orthant may view method constrained version iag method constant stepsize however linear convergence proof presently iteration may also viewed incremental version mirror descent method see beck teboulle surveys juditsky nemirovski references quoted author presentation section using similar arguments case iteration show iteration converges linearly started sufficiently close assuming strict complementarity condition appropriate constant stepsize conditions note iteration may implemented conveniently proximal iteration require proximal minimization however iteration suitable basis development incremental augmented lagrangian method iaali eqs local linear convergence result constrained iag method possible assuming strict complementarity condition particular shown sphere centered belongs sphere sequence generated iteration stays within sphere converges linearly idea proof first iteration iterates satisfy xjk indices method essentially reduces iag method space variables final comment relates choice stepsizes iteration coordinates bounded away asymptotically taylor expansion exponential obtain discarding second higher order terms see approximately xjk xjk suggests scaling stepsizes inversely proportional optimal value hand makes sense choose large subject positive lower bound order accelerate convergence xjk thus reasonable heuristic set max estimate optimal coordinate value positive scalar corresponds stepsize constrained iag iteration small positive constant one may also consider updating values course algorithm better estimates obtained concluding remarks paper proposed iap incremental aggregated proximal method shown favorable assumptions attains linear convergence rate using constant sufficiently small stepsize application method dual context separable constrained optimization problems yields iaal method incremental augmented lagrangian method preserves exploits separable structure principal difference method relative several alternative augmented proposals incremental character high update frequency multiplier alternative methods except algorithm one including proper version admm separable problems update primal variables simultaneously rather sequentially incremental nature moreover alternative methods update multipliers times less frequently iaal systematic computational comparison methods nonincremental alternatives helpful clarifying advantages incremental approach may hold several analytical issues relating iaal method require investigation example refined convergence rate analysis may point way adaptive stepsize adjustment schemes forms scaling based second derivatives cost function matrices analyses type admm see paper giselsson boyd references cited another possibility use momentum term updating formula multiplier third possibility control degree incrementalism batching multiple augmented lagrangian iterations involving multiple components also proposed linearly converging extensions iaal problems convex inequality constraints based nonquadratic augmented lagrangian approach exponential dual version incremental aggregated entropy algorithm fuller investigation method well method exponential analog iag method nonnegativity constraints important subjects investigation references ahn fessler blatt hero convergent incremental optimization transfer algorithms application tomography ieee transactions medical imaging vol bragin luh yan stern convergence surrogate lagrangian relaxation method optimization theory applications vol bertsekas 
ozdaglar convex analysis optimization athena scientific belmont boyd parikh chu peleato eckstein distributed optimization statistical learning via alternating direction method multipliers publishers inc boston bertsekas tsitsiklis parallel distributed computation numerical methods englewood cliffs beck teboulle mirror descent nonlinear projected subgradient methods convex optimization operations research letters vol bertsekas local convex conjugacy fenchel duality preprints triennial world congress ifac helsinki finland vol bertsekas convexification procedures decomposition methods nonconvex optimization problems optimization theory applications vol bertsekas constrained optimization lagrange multiplier methods academic press republished athena scientific belmont line http bertsekas convex optimization theory athena scientific belmont bertsekas incremental gradient subgradient proximal methods convex optimization survey lab information decision systems report mit bertsekas incremental proximal methods large scale convex optimization math programming vol bertsekas convex optimization algorithms athena scientific belmont chen yuan direct extension admm convex minimization problems necessarily convergent mathematical programming published line chen teboulle decomposition method convex minimization problems mathematical programming vol deng lai peng yin parallel admm convergence arxiv preprint dang lan randomized methods saddle point optimization arxiv preprint eckstein bertsekas splitting method proximal point algorithm maximal monotone operators math programming vol eckstein augmented lagrangian alternating direction methods convex optimization tutorial illustrative computational results rutcor research report rrr rutgers univ everett generalized lagrange multiplier method solving problems optimal allocation resources operations research vol feyzmahdavian aytekin johansson delayed proximal gradient method linear convergence rate prop ieee international workshop machine learning signal processing mlsp gurbuzbalaban ozdaglar parrilo convergence rate incremental aggregated gradient algorithms arxiv preprint gabay mercier dual algorithm solution nonlinear variational problems via approximations comp math vol gabay methodes numeriques pour optimization non lineaire doctorat etat sciences mathematiques uni pierre marie curie paris gabay applications method multipliers variational inequalities fortin glowinski augmented lagrangian methods applications solution problems amsterdam giselsson boyd metric selection splitting admm arxiv preprint glowinski marrocco sur approximation par elements finis ordre resolution par une classe problemes dirichlet non lineaires revue francaise automatique informatique recherche operationnelle analyse numerique hong chang wang razaviyayn luo block successive upper bound minimization method multipliers linearly constrained convex optimization arxiv preprint hamdi mishra decomposition methods based augmented lagrangians survey topics nonconvex optimization springer hong luo linear convergence alternating direction method multipliers arxiv preprint iusem svaiter teboulle proximal methods convex programming math operations research vol iusem augmented lagrangian methods proximal point methods convex minimization investigacion operativa vol juditsky nemirovski first order methods nonsmooth convex optimization general purpose methods optimization machine learning sra nowozin wright eds mit press cambridge juditsky nemirovski first order methods nonsmooth convex optimization 
utilizing problem structure optimization machine learning sra nowozin wright eds mit press cambridge kort bertsekas new penalty function method constrained minimization proc ieee confer decision control new orleans lions mercier splitting algorithms sum two nonlinear operators siam numerical analysis vol mairal optimization surrogate functions arxiv preprint mairal incremental optimization application machine learning arxiv preprint martinet regularisation variationelles par approximations successives revue fran automatique infomatique rech vol bertsekas borkar distributed asynchronous incremental subgradient methods proc haifa workshop inherently parallel algorithms feasibility optimization applications butnariu censor reich elsevier amsterdam bertsekas incremental subgradient methods nondifferentiable optimization siam optimization vol bertsekas effect deterministic noise subgradient methods math programming ser vol random algorithms convex minimization problems math programming ser vol nesterov introductory lectures convex optimization kluwer academic publisher dordrecht netherlands robinson tappenden flexible admm algorithm big data applications arxiv preprint rockafellar dual approach solving nonlinear programming problems unconstrained optimization math programming rockafellar monotone operators proximal point algorithm siam control optimization vol rockafellar augmented lagrangians applications proximal point algorithm convex programming math operations research vol ruszczynski convergence augmented lagrangian decomposition method sparse convex optimization math operations research vol schmidt roux bach minimizing finite sums stochastic average gradient arxiv preprint tanikawa mukai new technique nonconvex decomposition separable optimization problem ieee trans autom control vol tatjewski new decomposition algorithm nonconvex separable optimization problems automatica vol tseng bertsekas convergence exponential multiplier method convex programming math programming vol wang hong luo solving separable convex minimization problems using alternating direction method multipliers arxiv preprint wang bertsekas incremental constraint methods nonsmooth convex optimization lab information decision systems report mit appear siam optimization wang bertsekas incremental constraint projection methods variational inequalities mathematical programming vol
nov effective invariant theory permutation groups using representation theory nicolas borie abstract using theory representations symmetric group propose algorithm compute invariant ring permutation group approach goal reduce amount linear algebra computations exploit thinner combinatorial description invariant ring computational invariant theory representation theory permutation group drafty old version full corrected text available http introduction invariant theory rich central area algebra ever since eighteenth theory practical applications resolution polynomial systems symmetries see effective galois theory see discrete mathematics see original motivation second author literature contains deep explicit results special classes groups like complex reflection groups classical reductive groups well general results applicable group given level generality one hope results simultaneously explicit tight general thus subject effective early given group one wants calculate properties invariant ring impulsion modern computer algebra computational methods implementations largely expanded last twenty years however much progress still needed beyond toy examples enlarge spectrum applications classical approaches solving problem computing invariant ring use elimination techniques vector spaces high dimensions basis become impracticable number variables goes around modern computers evaluation approach proposed author required permutation group whose index symmetric group relatively controlled around modern computers approaches localize algebra reduction vector spaces still large dimensions basis approaches works monomials degree variables linear reduction space spanned monomials costly evaluation approach proposed author thesis linear algebra free module spanned cosets symmetric group permutation group index dimension case linear reduction globally cost cube dimension space one hope much classical approaches even progress computer propose article approach following idea adding combinatorics invariant theory help produce efficient algorithms whose outputs could perhaps reveal combinatorics also long time goal nicolas borie combinatorial description invariant ring generators couple primary secondary invariants families since hilbert problem solve restrictive special cases example give secondary invariants young subgroups symmetric groups focus problem computing secondary invariants finite permutation groups non modular case assuming shows localize computations inside selected irreducible representations symmetric group spaces smaller ones used classical approaches largely take advantages combinatorial results coming theory representations symmetric group invariant ring representations symmetric group set denote cardinality set invariant ring permutation group application combinatorics approach start result one key article invariant theory written stanley proposition mixing invariant finite group combinatorics recall general result proposition let homogeneous set parameters finite subgroup order set deg action quotient ring isomorphic times regular representation applying result symmetric group degree elementary symmetric polynomial recover well known result ring isomorphic regular representation symmetric group well known quotient called coinvariant ring symmetric group algebraic combinatorics world several basis ring explicitly built harmonic polynomials schubert polynomials descents monomials let group permutations subgroup know reapply result stanley homogeneous set parameters formed elementary 
symmetric polynomials ring coinvariant symmetric group also isomorphic time regular representation group know permutation group non modular case ring invariant action algebra imply exist family generator making ring invariant action free module rank ring symmetric polynomials effective invariant theory using representation theory taking quotient side ideal keeping representative equivalent class quotient definition subspace action trivial result stanley imply particular polynomials span subspace coinvariant symmetric group action trivial way construct thus search point inside ring coinvariant symmetric group could done irreducible representation irreducible representation theory representations symmetric group largely studied bring formulate following problem problem let positive integer permutation group subgroup construct explicit basis trivial representations appearing irreducible subrepresentation inside quotient first step solve problem constitute basis coinvariant symmetric group respecting action partitioned irreducible representations expose basis next section representations symmetric group recall section results describing irreducible representations symmetric group positive integer call partition denoted non increasing sequence integers whose entries sum sage sage sage partitions partitions integer partitions positive integer irreducible representations symmetric group indexed partitions since finite group multiplicity irreducible representation inside regular representation equal dimension information collected studying standard tableaux let positive integer partition tableau shape diagram square boxes disposed raw first raw contains boxes top second raw contains boxes standard tableau shape filled tableau shape integer integers increasing column raw denote set standard tableaux shape ask sage display object nicolas borie sage standardtableaux sage latex also iterate generate tableaux given shape sage standardtableaux sage latex standard tableaux shape number standard tableaux given shape easily computed using formula formula standard tableaux shape constitute basis indexing vector space associated irreducible representation symmetric group indexed representation must multiplicity dimension inside regular representation following computation illustrate equality check formula well implemented sage sage sage def dim partitions dim dim standardtableaux return dim range print know describe last useful object algorithmic come gather information irreducible representations symmetric group character table recall character representation map associate trace matrices group element map constant conjugacy classes conjugacy classes symmetric group indexed also partitions permutation single disjoint cycles representation belong effective invariant theory using representation theory conjugacy class indexed partition disjoint cycles representation contains cycles size respectively character table symmetric group gather square matrices value characters irreducible representations conjugacy classes symmetric group sage symmetricgroup sage symmetric group order permutation group sage permutation higher specht polynomials symmetric group algorithmic invariant theory must point construct invariant polynomials current approaches use reynolds operator orbit sum group special monomial group become large invariant become large even stored sparse manner inside computer number terms easily fit permutation group small index approach focuses combinatorics quotient higher specht polynomials constitute 
perfect family get explicit answer problem quotient isomorphic regular representation several copies irreducible representation following dimension specht polynomials associated standard tableaux allows construct explicit subspace isomorphic irreducible representation symmetric group partition span specht polynomials associated standard tableaux shape realize explicitly irreducible representation indexed see higher specht polynomial take care multiplicities irreducible representation inside coinvariant indexed pair standard tableaux shape constitute basis among known basis coinvariants symmetric group harmonic schubert monomials staircase descents monomials higher specht polynomial constitute basis partitioned irreducible construction let partition two standard tableaux shape define word reading tableau top bottom consecutive columns starting left number word index recursively number word index index lies left word index otherwise example nicolas borie two tableaux reading tableau give placing step step indices get initialization right lef right lef filling index corresponding cell tableau obtain index tableau using tableaux cells giving variable index correi sponding cell giving exponent build monomials follow monomials three variables standard tableaux shape let denote row stabilizer column stabilizer respectively consider young symmetrizer sign element group algebra know define polynomial fts fts effective invariant theory using representation theory theorem let positive integer family polynomials fts running standard tableaux shape form basis sym module terasoma yamada proved using usual bilinear form context divided difference associated longest element symmetric group three variable basis sym fts know try solve problem searching linear combination higher specht polynomials stabilized action permutation group combinatorial description invariant ring try slice invariant ring finer degree degree irreducible representations symmetric group homogeneous build format series mixing degree statistic partitions refinement moliens series let permutation group module also thus representation also representation usually irreducible representation stay irreducible restricted searching trivial representations inside irreducible representation done scalar product character permutation group denote set conjugacy classes usual scalar product two characters given chosen arbitrary proposition let partition positive integer let permutation group multiplicity trivial representation inside irreducible representation indexed given cycle type chosen arbitrary nicolas borie cycletype coefficient character table indexed partitions cycle type proof consist using usual scalar product characters trivial character thus remark value characters read character table conjugacy classes subset conjugacy classes traces matrices change representation viewed representation definition let permutation group using formal set variable indexed partitions define trivial multiplicities enumerator follow count multiplicities trivial representation inside irreducible representations symmetric group degree indexed partitions integer sage permutationgroup permutation group generators sage card permutation print card sage group thus definition let partition positive integer formal variable denote representation appearance polynomial defined follow cocharge sum run standard tableaux shape make link degree irreducible representations isomorphic abstract one indexed appearing inside quotient give multiplicities irreducible representation 
indexed generally coefficient integer term means sym isomorphic irreducible representation indexed built inside graded quotient degree higher specht polynomials realize explicitly representations effective invariant theory using representation theory cocharge exactly sum entries tableau degree corresponding specht proposition let permutation group trivial multiplicities enumerator hilbert series related proof result consequence statements combinatorics standard tableaux discussed previously dimension gtrivial space inside irreducible representation remains know degree copies irreducible representation lies quotient cocharge standard tableaux right statistic partitioning occurrences along degree back example sage permutationgroup permutation group generators sage sage symmetricgroup sage symmetric group order permutation group sage side let list standard tableaux shape cocharge injecting evaluations inside trivial multiplicities enumerator recover nicolas borie give number secondary invariants degree degree elementary symmetric polynomials taken primary invariants polynomials also quotient two hilbert series computed gap using molien formula secondary invariants built higher specht polynomial let permutation group partition let suppose calculated want build explicitly secondary invariant polynomials case homogeneous space inside want construct finite known number independent invariant polynomials action usual way dealt problem built explicit family spanning concerned space generating polynomials forming basis basis element basis element apply reynolds operator linear algebra get free family wanted dimension knowing dimension give stopping criteria often important since computations extremely heavy even small number variables context even usual approach would work permutation often given list generators even forget reynolds operator proposition let permutation group given generator let partition abstract space inside abstract representation indexed given intersection eigenspace representation matrices associated eigenvalue proof view representation indexed formal free module generated standard tableaux shape know subspace dimension elements invariant action invariant equivalent stabilized reynolds operator also equivalent definition fact stabilized action generators since working inside representation permutation associated matrix kernel matrix characterize formal subspace stabilized algorithm building secondary invariants present effective algorithm exploiting approach using representation symmetric group computation dependencies character table symmetric group conjugacy classes group cardinalities representatives matrices irreducible representation symmetric group linear algebra returned set composed linear combinations higher specht polynomials polynomials easily evaluated contains lot vandermonde factors expansion set formal variables heavy computation effective invariant theory using representation theory compute secondary invariant using representations input set permutations size generating group def secondaryinvariants ermutationgroup symmetricgroup artition ectorspace kernel abstract secondary basis standardt ableaux abstract secondary higherspecthp olynomials return large trace algorithm teen years computational challenge consist computing generating family ring invariants group acting edges graphs nodes group subgroup symmetric group degree cardinality far know computer algebra system already handle computation hours computation magma singular evaluation approach written sage 
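Before the authors' Sage-based listing below, the character average in the multiplicity proposition above can be checked in pure Python (the paper itself relies on Sage and GAP). The Murnaghan-Nakayama recursion used here is standard; the concrete group H (the cyclic group generated by a 4-cycle in S_4) is chosen purely for illustration.

def mn_character(lam, rho):
    # chi_lambda(rho): irreducible character of S_n at cycle type rho,
    # via the Murnaghan-Nakayama rule on beta-numbers (first-column hooks).
    if not rho:
        return 1 if not lam else 0
    k, rest = rho[0], tuple(rho[1:])
    n = len(lam)
    beta = [lam[i] + n - 1 - i for i in range(n)]     # strictly decreasing
    bs, total = set(beta), 0
    for b in beta:
        if b - k >= 0 and (b - k) not in bs:          # removable k-border strip
            height = sum(1 for x in beta if b - k < x < b)
            new_beta = sorted([x for x in beta if x != b] + [b - k], reverse=True)
            m = len(new_beta)
            new_lam = tuple(p for j in range(m)
                            if (p := new_beta[j] - (m - 1 - j)) > 0)
            total += (-1) ** height * mn_character(new_lam, rest)
    return total

def cycle_type(perm):
    # cycle type of a permutation given as a tuple acting on {0, ..., n-1}
    seen, lens = set(), []
    for s in range(len(perm)):
        if s not in seen:
            c, j = 0, s
            while j not in seen:
                seen.add(j); j = perm[j]; c += 1
            lens.append(c)
    return tuple(sorted(lens, reverse=True))

gen, g, H = (1, 2, 3, 0), (0, 1, 2, 3), []            # H = <(0 1 2 3)> inside S_4
for _ in range(4):
    H.append(g); g = tuple(gen[i] for i in g)

for lam in [(4,), (3, 1), (2, 2), (2, 1, 1), (1, 1, 1, 1)]:
    mult = sum(mn_character(lam, cycle_type(h)) for h in H) // len(H)
    print(lam, mult)                                   # prints 1, 0, 1, 1, 0

As a sanity check, by Frobenius reciprocity these multiplicities weighted by the dimensions of the corresponding irreducibles must sum to |S_4|/|H| = 6, the dimension of the coset module, which the printed values satisfy (1*1 + 1*2 + 1*3 = 6).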
finish sage precisely around percent computation linear algebra done hours tried approach group got following verbose trace read following pattern partition ambient dimension number standard tableaux shape rank repr dimension space sage load sage transitivegroup sage ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr nicolas borie ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr ambient dimension rank repr total total cpu time algorithm took seconds generated secondary invariants linear combinations higher specht polynomials still believe computation couple primary secondary invariants group unreachable magma singular evaluation approach less hours effective invariant theory using representation theory example inside symmetric representation associated partition algorithm making gauss reduction build space dimension inside ambient space dimension using formula check exist standard tableaux shape implementation details approach involves lot technology around representation theory symbolic computation required prerequisites available current computer algebras system problem dependencies implemented gap sage implemented first version sage lines code tests documentation test efficienty approach main steps computation compute character table symmetric group task done gap rule example working algorithm small part computations also precomputed store small symmetric group degree enumerate cardinality representative conjugacy class also handled gap interfaced sage general problem enumerating conjugacy classes finite group simple efficient algorithms permutation group compute trivial multiplicities enumerator computed short function iterate partitions compute simple scalar product see formula calculate matrices inside irreducible representations identified space inside irreducible build matrices generators using sage object symmetricgrouprepresentation admits argument partition returned object able build matrices abstract representation permutation given line notation compute intersection stabilized subspace matrices use loop method intersection sage vectorspace method kernel sage matrix class linear algebra thus handle sage transcript abstract combinations standard tableaux term higher specht polynomials sage especially contains combinatorial family standard tableaux lot combinatorial statistic availlable objects like cocharge implemented higher specht polynomials sage polynomials computed pair sage standardtableaux shape code completely optimized since computations invariants minimal generating set secondary invariants hironaka decomposition present nicolas borie large theoretical exponential complexity believe code source refinement need new algorithms better asymptotically behavior however huge factor execution 
time easily wined code cache remember better interface gap sage parralelisation computer core echelonize matrices special irreducible representation wanted benchmarks approach using current available open source technology therefore anyone even student reproduce experience rapidly free complexity rich literature effective invariant theory provide lot fin complexity bound algorithms basis admit general complexity bounds worst case variables appears overestimated compared effective behavior thesis author present evaluation approach compute invariants inside quotient reduced dimension algorithm computing secondary technique complexity using representation symmetric group still hard establish fin bound however produce better bounds theorem let permutation group subgroup given generators complexity linear algebra reduction algorithm computing secondary invariants section bounded number standard tableaux shape number partition size proof straightforward counting reductions matrices permutation irreducible representation symmetric group partition construct free family vectors rank inside space dimension number indeterminates equations vertical concatenation matrices summing operations give announced bound corollary let permutation group subgroup given generators complexity algorithm computing secondary invariants section complexity proof let denote fmax max thus fmax since worse irreducible representation composed element stabilized point wise fmax term sum using formula get bound fmax using triangular inequality roughly fmax give result effective invariant theory using representation theory developments even impressive algorithmic efficiently approach using representation symmetric group also advantage putting lot combinatrics inside problem often classed inside effective algebraic geometry acknowledgements research driven computer exploration using mathematical software sage particular perused algebraic combinatorics features developed community well group theoretical features provided gap references abdeljaouad des invariants applications galois effective phd thesis paris bergeron algebraic combinatorics coinvariant spaces cms treatises mathematics borie calcul des invariants des groupes permutations par fourier phd thesis laboratoire paris sud colin solving system algebraic equations symmetries pure appl algebra algorithms algebra eindhoven colin des invariants effective applications galois implantation axiom phd thesis polytechnique derksen kemper computational invariant theory berlin rahmany solving systems polynomial equations symmetries using bases proceedings international symposium symbolic algebraic computation pages gap group lehrstuhl mathematik rwth aachen germany smcs andrews scotland gap groups algorithms programming garsia stanton group actions rings invariants permutation groups advances mathematics gatermann symbolic solution polynomial equation systems symmetry informationstechnik berlin geissler galois group computation rational polynomials symbolic algorithmic methods galois theory kemper invar package calculating rings invariants iwr preprint university heidelberg king fast computation secondary invariants arxiv preprint king minimal generating sets invariant rings finite groups arxiv preprint lascoux schubert acad sci paris pouzet invariants graphes reconstruction acad sci paris community enhancing sage toolbox computer exploration algebraic combinatorics stanley invariants finite groups applications combinatorics bull amer math soc stein sage mathematics software version 
The Sage Development Team, http://www.sagemath.org.
Sturmfels, Algorithms in Invariant Theory, Springer, Vienna.
Terasoma and Yamada, Higher Specht polynomials for the symmetric group, Proc. Japan Acad. Ser. A Math. Sci.
Thiéry, Algebraic invariants of graphs: a study based on computer exploration, SIGSAM Bulletin.
Thiéry, Computing minimal generating sets of invariant rings of permutation groups with SAGBI-Gröbner basis, Discrete Models, Paris, pages (electronic).

Univ. Paris-Est, Laboratoire d'Informatique Gaspard Monge, Cité Descartes, Bât. Copernic, bd Descartes, Champs-sur-Marne Cedex, France
deep domain adaptation peng ziyan jan ernst siemens corporate technology princeton usa nov abstract existing methods domain adaptation work assumption training data given however assumption violated often ignored prior works tackle issue propose deep domain adaptation zdda uses privileged information pairs zdda learns sourcedomain representation tailored task interest toi also close representation therefore toi solution classifier classification tasks jointly trained representation applicable source target representations using mnist nist emnist sun datasets show zdda perform classification tasks without access targetdomain training data also extend zdda perform sensor fusion sun scene classification task simulating representations data best knowledge zdda first sensor fusion method needs data figure propose deep domain adaptation zdda domain adaptation sensor fusion zdda learns pairs targetdomain training data unavailable example domain adaptation task mnist pairs dataset dataset colored version fashionmnist dataset details sec goal task derive solution toi source target domains methods proposed solve tasks assumption data data directly applicable related toi regardless whether labeled target domain available training time always true practice instance real business use cases acquiring taskrelevant training data infeasible due combination following reasons unsuitable tools field product development timeline budget limitation data regulations impractical assumption also assumed true existing works sensor fusion goal obtain source target toi solution robust noise either domain unsolved issue motivates propose deep domain introduction useful information solve practical tasks often exists different domains captured various sensors domain either modality dataset instance layout room either captured depth sensor inferred rgb images scenarios highly likely access limited amount data certain domain performance solution classifier classification tasks learn one domain often degrades solution applied domains caused domain shift typical domain adaptation task training data targetdomain training data task interest toi given tation zdda sensor fusion approach learns training pairs without using training data use term data refer data illustrate zdda designed achieve figure using example task mnist recommend readers view figures tables color figure source target domains gray scale rgb images respectively toi digit classification mnist testing data assume training data unavailable example zdda aims using mnist training data pairs fashionmnist dataset dataset colored version dataset details sec train digit classifiers mnist images specifically zdda achieves simulating rgb representation using gray scale image building joint network supervision toi gray scale domain present details zdda sec make following contributions best knowledge proposed method zdda first deep learning based method performing domain adaptation different image modalities instead different datasets modality office dataset without using training data show zdda efficacy using mnist nist emnist sun datasets cross validation given training data show zdda perform sensor fusion zdda robust noisy testing data either source target domains compared naive fusion approach scene classification task sun dataset unavailable reality contrast propose zdda learn pairs without using training data one part zdda includes simulating representation using data similar concepts mentioned however require access dualdomain training pairs zdda needs access data 
name domain adaptation already used blitzer yang however methods require additional prerequisites blitzer method needs access data requiring domain descriptor always observed accurate yang method shows efficacy using different datasets modality specifically office dataset instead data different modalities zdda requires none two prerequisites show efficacy zdda data different modalities terms sensor fusion ngiam define three components multimodal learning multimodal fusion cross modality learning shared representation learning based modality used feature learning supervised training testing experiment audiovideo data proposed deep belief network autoencoder based method targeting temporal data yang follow setup multimodal learning validate proposed architecture using data although certain progress sensor fusion achieved previous works unaware existing sensor fusion method overcomes issue lacking training data issue zdda designed solve proposed method zdda given task interest toi source domain target domain proposed method deep domain adaptation zdda designed achieve following two goals domain adaptation derive solutions toi taskrelevant training data unavailable assume access labeled training data pairs sensor fusion given previous assumption derive solution toi testing data available testing data either noisy assume prior knowledge available type noise domain gives noisy data testing time convenience use scene classification task example toi explain zdda zdda related work domain adaptation extensively studied computer vision applied various applications image classification semantic segmentation image captioning advance deep neural networks recent years methods successfully perform fully partially labeled unlabeled taskrelevant data although different strategies domain adversarial loss domain confusion loss proposed improve performance tasks existing methods need training data figure overview zdda training procedure use images sun dataset illustration zdda simulates representation using data builds joint network supervision source domain trains sensor fusion network step choose train fix also train fix simulate representation step also trainable instead fixed choose fix make number trainable parameters manageable details explained sec applied example depth rgb images respectively according previous assumption access taskrelevant labeled depth data pairs training time training procedure zdda illustrated figure simulate rgb representation using depth image build joint network supervision toi depth images train sensor fusion network step step step respectively use marked bottom convolutional neural networks cnn figure refer cnn step create two cnns take depth rgb images pairs input purpose step find feeding rgb image approximated feeding corresponding depth image achieve fixing enforcing loss top training time choose train fix training fixing also achieve purpose loss replaced suitable loss functions encourage similarity two input representations selection inspired design step similar hallucination architecture supervision transfer require taskrelevant training pairs instead use training pairs step add another cnn network architecture classifier network shown step learn label training depth images classifier experiment fully connected layer simplicity types classifiers also used newly added cnn takes depth images input shares weights original source cnn use refer step training time fix choice fixing inspired adversarial adaptation step adda also trainable step given limited amount data choose 
fix make number trainable parameters manageable source classifier trained weighted sum softmax loss loss minimized softmax loss replaced losses suitable toi step expect obtain depth representation close rgb representation feature space performs reasonably well trained classifier scene classification step step done one step properly designed curriculum learning separate clarity also difficulty designing learning curriculum training step form scene classifier denoted concatenating trained source classifier shown figure meets first goal domain adaptation use notation refer method using training procedure figure step testing procedure figure perform sensor fusion propose step train joint classifier input using depth training data create two cnns network architecture add concatenation layer top form three scene classifiers rgb depth domains one classifier per domain trained rgbd classifier expected able handle noisy input reasonable performance degradation use notation refer method using training procedure figure step testing procedure figure experiment setup testing data datasets domain adaptation validate efficacy zdda classification tasks using mnist nist emnist sun datasets sensor fusion experiment sun dataset summarize statistics datasets table list dataset ids use refer datasets create colored version datasets according procedure proposed ganin work blending gray scale images patches randomly extracted dataset colored datasets original ones used construct four tasks adapting gray scale rgb images task use one three pairs datasets original colored ones data example task together one possible choice data task acknowledged one standard experiments test efficacy methods recent works adopt experiment extend contains pairs belonging different scenes pair raw noisy depth image clean depth image provided choose use raw depth image simulate scenarios scenes select following scenes computer room conference room corridor dining room discussion area home office idk lab lecture theatre study space number scene scene use refer scene rgbd pairs belonging scenes used taskirrelevant training data scenes selected based following two constraints scene contains least pairs ensures reasonable amount data total number pairs belonging selected scenes minimized maximizes amount training data empirically find amount diversity training data important zdda avoid bias toward scene testing sensor fusion figure overview zdda testing procedure use sun images illustration concatenate output representations concatenated representation connected joint classifier training time respectively fix take depth images input train robust scene classifier randomly select inputs optionally add noise independently supervise entire network label depth training data scene classification done softmax loss enforced top joint classifier according step output expected simulate rgb representation feed taskrelevant rgb image expectation based assumption relationship pairwise data similar regardless whether data given simulated rgb representation trained learn depth representation suitable scene classification without constraint loss step testing time replaced takes rgb testing images input optional noise added test zdda performance given noisy testing data shown figure figure also test replacing rgb images depth images evaluate performance zdda step given testing depth images training procedure figure original dataset dataset classification subject mnist digit clothing nist letter emnist letter sun scene original image size classes training 
images testing images class labels balanced class example images dress coat etc see sec details see sec details corridor lab etc colored dataset example images table statistics datasets use nist use class dataset remove digits treat uppercase lowercase letters different classes emnist use emnist letters split contains letters create colored datasets original ones using ganin method see sec details refer dataset corresponding dataset refer nist datasets respectively wards example using taskirrelevant data task train cnn denoted nref lenet architecture scratch using images labels target cnns figure nref follow similar procedures tasks taskirrelevant datasets involving experiment involving mostly use googlenet bna also use alexnet squeezenet cross validation experiment respect different bnas since limited amount pairs available cnns figure bvlc googlenet model bvlc alexnet model reference squeezenet model bna googlenet alexnet squeezenet respectively models trained imagenet classification task optionally added noise experiment data noisy data latter case given prior knowledge noise available use black image noisy image model extreme case information noisy image available train step augmented training data formed copying original training data times replacing ptrain images selected randomly black images follow procedure twice independently use two augmented training datasets inputs two source cnns step empirically set ptrain testing data figure constructed replacing ptest original testing images selected randomly black images evaluate zdda different ptest experiments number base network cnn architecture architecture bna bna inclusive lenet googlenet alexnet squeezenet table base network architecture bna use experiments bna specify layer separating cnn source classifier figure layer name right column based official caffe squeezenet implementation bna data selected scenes randomly select pairs data experimenting different scene classification tasks using different selections scenes use data associated selected scenes data training details use caffe implement zdda table lists base network architecture bna use layer separating cnn source classifier figure instance case bna lenet architecture cnn figure lenet architecture layer rest lenet architecture used source classifier tasks involving use lenet bna train cnns figure scratch except target cnn dataset fixed data none source target table overall average per class accuracy domain adaptation tasks gray scale images rgb images formed datasets table introduce dataset ids use refer datasets middle four rows show performance color cell reflects performance ranking column darker better output nodes classifiers set number classes toi classifiers trained scratch joint classifiers use two fully connected layers unless otherwise specified first fully connected layer joint classifier output nodes visualization network architectures source cnn source classifier joint classifier bna shown supplementary material terms training parameters used figure task involving bna googlenet use batch size fixed learning rate step learning rate chosen trained network converge reasonable amount time set weight softmax loss loss step respectively losses comparable numerical values step trained iterations training parameters adopt default ones used training bvlc googlenet model imagenet classification task unless otherwise specified details training parameters experiments supplementary material general adopt default training parameters used training bna either mnist imagenet 
classification tasks caffe squeezenet implementation unless otherwise specified method accuracy table performance comparison domain adaptation task report best overall accuracy table listed methods except use training data without access mnistm training data still achieve accuracy comparable competing methods even outperform task performance without applying method baseline sensor fusion compare naive fusion method predicting label highest probability crgb sec experimental result first compare baseline four domain adaptation tasks adapting gray scale rgb images involving result summarized table numbers represent per class accuracy darker cells column represent better classification accuracy task table middle four rows represent performance data directly related letter classification tasks table shows regardless data use significantly outperforms baseline source find similarity task data related performance improvement baseline example task using data outperforms using consistent intuition letters semantically similar digits compared clothing second table compare performance references baselines obtain performance references fully supervised methods train classifier bna table domain using training data labels domain bna lenet train classifier scratch bnas classifier way described sec training task get two fully supervised classifiers source target domains respectively baseline task directly feed testing images obtain exp method training modality testing modality number classes googlenet googlenet googlenet rgb rgb rgb rgb selected scene ids introduced sec table performance comparison different numbers classes scene classification reported numbers classification accuracy color cell reflects performance ranking column darker color means better performance represents pairs existing methods task considered one standard experiments recent works although fair comparison access taskrelevant training data find reach accuracy comparable methods even outperform supports promising method training data unavailable third test efficacy zdda tasks constructed adapting depth rgb images compare zdda baseline different scene classification tasks changing number scenes involved result summarized table list training testing modalities method also list scene ids introduced sec involved task darker cells represent better accuracy column simplicity use refer experiment specified exp section fully supervised methods depth domain zdda outperforms baseline due extra information brought pairs find listed tasks outperforms consistent intuition source representation constrained loss counterpart learned without constraint given simulated target representation fully supervised method rgb domain outperforms baseline domain adaptation access rgb training data unavailable performance improvement caused training procedure well extra training pairs perform similarly supports simulated target representation similar real one test consistency performance zdda training testing validation validation method modality modality splits class choices rgb rgb rgb rgb classes folds table validation zdda performance mean classification accuracy different splits choices classes scene classification stands googlenet definition representation cell color column table compared baseline perform following three experiments first conduct cross validation different splits classification second perform validation different selections classes classification experiment selected scenes introduced sec third validate zdda performance different base 
network architectures results first two experiments presented table extended version supplementary material result third experiment shown table results experiments consistent observations table table table table classification accuracy reported condition training testing data let zdda robust noisy naive fusion accuracy improvement figure performance comparison two sensor fusion methods black images noisy images compare classification accuracy naive fusion different noise levels rgb depth testing data shows outperforms naive fusion conditions method training modality testing modality bna data rgb domain still train fusion model performance degrades smoothly noise increases addition using black images noise model evaluate trained joint classifier using another noise model adding black rectangle random location size clean image testing time result supplementary material also supports outperforms naive fusion method although use black images noise model training time expect adding different noise models improve performance robustness bna bna bna rgb rgb rgb rgb table validation zdda performance different base network architectures bna scene classification reported numbers classification accuracy stand googlenet alexnet squeezenet respectively definition representation cell color column table conclusion future work propose deep domain adaptation zdda novel approach perform domain adaptation sensor fusion need targetdomain training data inaccessible reality key idea use data simulate representations learning pairs experimenting mnist nist emnist sun datasets show zdda outperforms baselines sensor fusion even without training data task adapting mnist zdda even outperform several methods require access mnistm training data believe zdda straightforwardly extended handle tasks interest modifying loss functions figure step share template zdda training procedure two examples extensions supplementary material input train step noisy training data use ptrain explained sec evaluate classification accuracy different noise conditions rgb depth testing data result presented figure figure outperforms naive fusion method figure conditions performance improvement shown figure figure figure show performance degradation caused noisy depth testing data larger caused noisy rgb testing data supports trained classifier relies depth domain traditionally training fusion model requires training data modalities however show without training references haeusser frerix mordvintsev cremers associative domain adaptation iccv pages hoffman gupta darrell learning side information modality hallucination cvpr pages iandola han moskewicz ashraf dally keutzer squeezenet accuracy fewer parameters model size arxiv preprint arxiv jia shelhamer donahue karayev long girshick guadarrama darrell caffe convolutional architecture fast feature embedding arxiv preprint arxiv koniusz tas porikli domain adaptation mixture alignments scatter tensors cvpr pages krizhevsky sutskever hinton imagenet classification deep convolutional neural networks nips pages lecun bottou bengio haffner gradientbased learning applied document recognition proceedings ieee motiian piccirilli adjeroh doretto unified deep supervised domain adaptation generalization iccv pages ngiam khosla kim nam lee multimodal deep learning icml pages saenko kulis fritz darrell adapting visual category models new domains eccv pages saito ushiku harada asymmetric unsupervised domain adaptation icml sener song saxena savarese learning transferrable representations unsupervised domain 
adaptation. NIPS.
Sohn, Liu, Zhong, Yang, Chandraker. Unsupervised domain adaptation for face recognition in unlabeled videos. ICCV.
Song, Lichtenberg, Xiao. SUN RGB-D: a RGB-D scene understanding benchmark suite. CVPR.
Sun, Saenko. Deep CORAL: correlation alignment for deep domain adaptation. ECCV Workshops.
Szegedy, Liu, Jia, Sermanet, Reed, Anguelov, Erhan, Vanhoucke, Rabinovich. Going deeper with convolutions. CVPR.
Tzeng, Hoffman, Darrell, Saenko. Simultaneous deep transfer across domains and tasks. ICCV.
Tzeng, Hoffman, Saenko, Darrell. Adversarial discriminative domain adaptation. CVPR.
Venkateswara, Eusebio, Chakraborty, Panchanathan. Deep hashing network for unsupervised domain adaptation. CVPR.
Wang, Dai, Van Gool. Deep domain adaptation by geodesic distance minimization. ICCV.
BVLC AlexNet model: http caffemodel. Accessed.
BVLC GoogLeNet model: http caffemodel. Accessed.
LeNet architecture: Caffe tutorial, https.
SqueezeNet model: https caffemodel. Accessed.
Aljundi, Tuytelaars. Lightweight unsupervised domain adaptation by convolutional filter reconstruction. ECCV Workshops.
Arbelaez, Maire, Fowlkes, Malik. Contour detection and hierarchical image segmentation. TPAMI.
Blitzer, Foster, Kakade. A domain adaptation approach. Technical report, Toyota Technological Institute.
Bousmalis, Silberman, Dohan, Erhan, Krishnan. Unsupervised pixel-level domain adaptation with generative adversarial networks. CVPR.
Chen, Liao, Chuang, Hsu, Sun. Show, adapt and tell: adversarial training of cross-domain image captioner. ICCV.
Cohen, Afshar, Tapson, van Schaik. EMNIST: an extension of MNIST to handwritten letters. arXiv preprint.
Deng, Dong, Socher, Fei-Fei. ImageNet: a large-scale hierarchical image database. CVPR.
Ganin, Lempitsky. Unsupervised domain adaptation by backpropagation. ICML.
Ganin, Ustinova, Ajakan, Germain, Larochelle, Laviolette, Marchand, Lempitsky. Domain-adversarial training of neural networks. Journal of Machine Learning Research.
Gebru, Hoffman. Fine-grained recognition in the wild: a domain adaptation approach. ICCV.
Ghifary, Kleijn, Zhang, Balduzzi. Deep reconstruction-classification networks for unsupervised domain adaptation. ECCV.
Gretton, Smola, Huang, Schmittfull, Borgwardt. Covariate shift and local learning by distribution matching. MIT Press, Cambridge, USA.
Grother, Hanaoka. NIST special database: handprinted forms and characters. Department of Commerce.
Gupta, Hoffman, Malik. Cross modal distillation for supervision transfer. CVPR.
Wen, Afzal, Zhang, Chen. A compact DNN: approaching GoogLeNet-level accuracy of classification and domain adaptation. CVPR.
Wulfmeier, Bewley, Posner. Addressing appearance change in outdoor robotics with adversarial domain adaptation. IROS.
Xiao, Rasul, Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint.
Yan, Ding, Wang, Zuo. Mind the class weight bias: weighted maximum mean discrepancy for unsupervised domain adaptation. CVPR.
Yang, Ramesh, Chitta, Madhvanath, Bernal, Luo. Deep multimodal representation learning from temporal data. CVPR.
Yang, Hospedales. Domain adaptation via kernel regression on the Grassmannian. BMVC Workshop on Differential Geometry in Computer Vision.
Zhang, Ogunbona. Joint geometrical and statistical alignment for visual domain adaptation. CVPR.
Zhang, David, Gong. Curriculum domain adaptation for semantic segmentation of urban scenes. ICCV.
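The core of the ZDDA training procedure described above is the representation-simulation step: the RGB CNN trained on the source task is frozen, and a depth CNN is trained so that its representation of the depth half of a task-irrelevant RGB-D pair matches the frozen RGB representation of the RGB half under an L2 loss. The sketch below is a minimal PyTorch-style illustration, not the authors' Caffe implementation; the trunk architecture, the 32x32 input size, the 3-channel depth encoding, and the learning rate are all assumptions.

```python
import torch
import torch.nn as nn

def make_trunk(out_dim=128):
    # Small stand-in CNN for 3 x 32 x 32 inputs; the paper instead uses
    # LeNet / GoogLeNet / AlexNet / SqueezeNet up to a chosen split layer.
    return nn.Sequential(
        nn.Conv2d(3, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(32 * 5 * 5, out_dim),
    )

rgb_cnn = make_trunk()            # assumed already trained on the source task
for p in rgb_cnn.parameters():    # frozen during the simulation step
    p.requires_grad = False
rgb_cnn.eval()

depth_cnn = make_trunk()          # trainable: learns to simulate RGB features
opt = torch.optim.SGD(depth_cnn.parameters(), lr=1e-3)

def simulation_update(rgb, depth):
    """One update on a task-irrelevant RGB-D pair (depth assumed 3-channel)."""
    with torch.no_grad():
        target = rgb_cnn(rgb)                                # fixed RGB features
    loss = nn.functional.mse_loss(depth_cnn(depth), target)  # L2 simulation loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random tensors standing in for SUN RGB-D crops.
print(simulation_update(torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)))
```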
1
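Another piece of the ZDDA pipeline above that is easy to make concrete is the black-image corruption used both to augment the training data (a fraction p_train of images replaced by black images, done twice independently for the two source CNN streams) and to build noisy test sets (fraction p_test). A NumPy sketch under assumed array shapes; the value p = 0.5 is a placeholder, since the actual p_train was stripped from the extraction:

```python
import numpy as np

def corrupt_with_black(images, p, rng):
    """Replace a fraction p of the images (N x H x W x C, float) with black images."""
    out = images.copy()
    n_black = int(round(p * len(images)))
    idx = rng.choice(len(images), size=n_black, replace=False)
    out[idx] = 0.0                      # a black image is all-zero pixels
    return out

rng = np.random.default_rng(0)
train = np.random.rand(100, 32, 32, 3).astype(np.float32)

# Two independently corrupted copies of the data, one per source CNN stream.
aug_a = corrupt_with_black(train, p=0.5, rng=rng)
aug_b = corrupt_with_black(train, p=0.5, rng=rng)
```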
control jan masashi wakaiki hideki abstract consider control linear systems show event trigger measures difference control input leads positive minimum time feedback operator compact moreover certain natural assumptions system show exists event trigger system exponentially stable key words control systems stabilization ams subject classifications introduction control one control methods intervals data transmissions determined predefined condition data result network energy resources consumed data necessary control addition applications analysis synthesis control interesting theoretical viewpoint event triggering interacts dynamics way different usual periodic control schemes existing studies control developed systems researchers recently extended systems systems output delays hyperbolic systems secondorder parabolic systems paper consider following system state space input space generator strongly continuous semigroup bounded linear operator extrapolation space infinitedimensional systems described abstract evolution equation results periodic control systems generalized number papers see instead periodic control updating generate control input scheme based difference inf positive constant bounded linear operator since small leads frequent updates control input would expect eventtriggered feedback system exponentially stable sufficiently small generates exponentially stable semigroup one fundamental problems consider whether intuition correct addition stability minimum time inf submitted editors date funding work supported jsps kakenhi grant numbers graduate school system informatics kobe university nada kobe hyogo japan wakaiki sano masashi wakaiki hideki sano guaranteed positive otherwise infinitely many events might occur finite time phenomenon called zeno behavior see makes eventtriggered control schemes infeasible practical implementation guarantee zeno freeness unique theoretical specification control systems minimum time extensively investigated see example case shown inf satisfies inf however true case illustrated example reason use eventtrigger based difference control input see section time sequence constructed satisfies inf feedback operator compact discussion minimum time section analyze exponential stability feedback system section case input operator bounded introducing norm state space respect semigroup generated contraction provide sufficient condition event trigger parameter exponential stability feedback system moreover certain assumption semigroup obtain another sufficient condition exponential stability obtain former result via approach key element latter result application lyapunov stability theorem section study case unbounded input operators first focus system unstable part feedback operator stabilizes unstable part act residual stable part case feedback operator specific structure achieve exponential stability feedback system using less conservative event triggers finitedimensional part second assume semigroups generated suitable domains analytic exponentially stable respectively assumptions show every compact feedback operator exist periodic event triggers achieving exponential stability notation terminology denote set nonnegative integers define res let banach spaces let denote space bounded linear operators set let operator domain denoted spectrum subset let denote restriction namely strongly continuous semigroup exponential growth bound denoted say strongly continuous semigroup exponentially stable feedback system system let time sequence satisfy inf control 
systems let denote state space input space hilbert spaces let denote norm inner product respectively space stands completion norm element resolvent set clearly consider following system generator strongly continuous semigroup input operator feedback operator satisfy say bounded otherwise unbounded semigroup extended strongly continuous semigroup generator extension shall use symbols original ones associated extensions abstract evolution equation define function recursively periodic case considering semigroup find standard theory strongly continuous semigroups given satisfies additionally following differential equation holds since defined satisfies properties say unique solution abstract evolution equation definition exponential stability system exponential stable exist given satisfies call constant upper bound decay rate system minimum times call inf minimum time value zero event trigger may update input infinitely fast realized digital platform section show following event triggers guarantee positive time certain assumptions inf inf masashi wakaiki hideki sano whereas use event triggers measure difference inputs event triggers systems based difference state inf semigroup uniformly continuous one show time event trigger positive however systems described uniformly continuous semigroups contain practical systems hand following simple example event triggers based difference state lead inf finite time example let hilbert space square integrable functions usual inner product consider shift operator strongly continuous semigroup discussed remark strongly stable set initial state assume define time sequence inf follows similarly define time sequences monotonically increasing converge thus minimum time inf zero define operator buds following lemma useful discussion minimum time lemma sup moreover compact lim control systems compact operators next lemma also known lemma let hilbert spaces let given compact lim lemmas obtain following result lemma assume compact set every exists proof since leads follows lemma lim since strongly continuous obtain lim hence follows lemma lim thus every exists combining obtain desired result lemma see minimum time inf positive event trigger theorem assume compact system set time sequence recursively satisfying every time sequence satisfies inf proof follows every therefore see lemma every exists since every time sequence satisfies inf completes proof masashi wakaiki hideki sano next investigate minimum time event trigger addition lemma need following estimate lemma assume compact set exist semigroup satisfies exist proof suppose first holds since exists follows therefore exists kst kst combining obtain kst therefore holds conversely holds using discussion find exists completes proof remark suppose bounded lim hence compactness required lemma remark condition appears also theorem applicability lyapunov stability theorem shown corollary satisfies range dense extended strongly continuous group control systems see lemmas event trigger leads positive minimum time strongly continuous semigroup satisfying theorem assume compact semigroup satisfies system set time sequence recursively satisfying every time sequence satisfies inf proof combining lemmas every exists thus time sequence satisfies inf stability analysis bounded control section analyze stability case input operator bounded feedback operator compact semigroup generated exponentially stable feedback system exponentially stable provided parameter sufficiently small fixing first set time sequence recursively 
satisfying min inf theorem assume generates strongly continuous semigroup compact assume exists semigroup tbf generated satisfies ktbf set time sequence parameter satisfying kbkb system exponentially stable decay rate defined log min kbkb inf proof theorem theorem introduce new norm defined sup tbf follows tbf kxk kxk masashi wakaiki hideki sano hand kxk tbf sup tbf hence kxk kxk moreover sup tbf sup tbf sup tbf sup tbf noting see routine calculation see exercise appendix written tbf tbf since condition leads sup using properties obtain kbkb sup defined satisfies since satisfies follows therefore defined satisfies control systems particular using recursively obtain thus see completes proof remark general difficult find constant satisfying tbf expanded riesz basis characterize constant following way let riesz spectral operator definition simple eigenvalues corresponding eigenvectors exist positive constants every every scalars let eigenvectors biorthogonal shown proof theorem tbf satisfies tbf ktbf thus constant satisfying given particular orthogonal basis hence following example illustrates result theorem example consider metal rod length one heated along length temperature rod addition heat along bar time position respectively reformulate equation abstract evolution equation hilbert space square integrable functions usual inner product also introduce operators domain masashi wakaiki hideki sano similarly example optimal control cost functional dxdt given sin orthonormal basis since follows feedback operator bounded moreover show compact indeed define operator find parseval equality therefore obtain since lim follows operator uniformly converges hence feedback operator compact using expansion domain control systems see tbf follows discussion remark semigroup tbf satisfies ktbf follows theorem parameter satisfies system event trigger exponential stable next set time sequence recursively show exponential stability system event trigger instead approach theorem apply lyapunov stability theorem theorem assume generates strongly continuous semigroup compact assume semigroup satisfies semigroup tbf generated exponential stable set time sequence parameter satisfying tbf tbf xdt bkb system exponentially stable decay rate defined bkb ktbf proof exists ktbf obtain since tbf satisfies following identity see theorem tbf xds tbf follows ktbf ktbf xkds kbf kxk therefore ktbf kxk masashi wakaiki hideki sano min kbf since tbf exponential stable follows theorem exists positive operator following lyapunov inequality holds operator given using theorem exist since ktbf ktbf follows choose ktbf assume mild solution also classical solution namely satisfies differential equation moreover define error event trigger therefore using find every every satisfies bkb bkb defined see positive definiteness hence finally show exponential stability initial states fix arbitrarily let solution abstract evolution equation initial state since every lemma control systems depends continuously initial state sense exists constant moreover since dense follows every exists solution abstract evolution equation initial state satisfies therefore since arbitrary obtain thus feedback system exponentially stable decay rate remark satisfy ktbf obtain therefore rewritten kbkb kbkb stability analysis unbounded control throughout section consider unbounded input operator provide two eventtriggered control schemes exponential stabilization feedback system first approach based system decomposition second one employs periodic event trigger 
developed control based system decomposition system decomposition follows shall place number assumptions system recall decomposition systems unbounded control used assumption exists consists finitely many eigenvalues finite algebraic multiplicities assumption holds decompose standard technique see lemma follows exists rectifiable closed simple curve intersecting containing interior exterior operator defined masashi wakaiki hideki sano traversed counterclockwise direction projection operator decompose decomposition satisfies dim invariant define note generate semigroups respectively semigroup extended strongly continuous semigroup extrapolation space generator extension symbols used denote extensions since spectrum operator equal spectrum operator projection operator defined extended projection considered operator similar satisfies using extended projection operator decompose control operator since completions endowed norm identify see footnote also decompose feedback operator addtion assumption impose following assumptions assumption exponential growth satisfies assumption controllable remark define operator abf abf abf domain abf shown generates analytic semigroup exists compact operator semigroup generated abf exponentially stable assumptions hold control systems control every define set feedback operator control input given section set time sequence system exponentially stable decay rate example theorem set time sequence recursively satisfying min inf similarly theorem use following event trigger min inf since finite dimensional obtain less conservative conditions parameter feedback system stable particular obtain following result event trigger proposition consider system event trigger assume input space finite dimensional set exist positive matrices positive scalar following linear matrix inequalities feasible system exponential stable decay rate similarly theorem prove proposition proof found appendix theorem let assumptions hold assume feedback gain time sequence chosen system exponentially stable decay rate exist infinitedimensional system exponentially stable decay rate every proof setting every every masashi wakaiki hideki sano unique solution hence exists hand follows every every since since strongly continuous semigroup follows lemma every exists sup therefore note number events occur hence satisfies moreover every exists note every exists every control systems since follows thus obtain since arbitrary system exponentially stable decay rate every stability periodic event triggers theorem feedback operator specific structure contrast assume generates analytic semigroup use periodic event trigger proposed see every compact feedback operator exists periodic event trigger system exponential stability fixing set time sequence min min call event trigger periodic event trigger time sequence satisfies every theorem assume generates analytic semigroup compact semigroup generated abf exponentially stable exist every every system periodic event trigger exponentially stable proof theorem exists every operator defined power stable exist fix let every periodic event trigger lemma follows hence proof theorem counterpart theorem masashi wakaiki hideki sano define error every every hence introduce new norm defined sup norm following properties kxk kxk periodic event trigger error satisfies combining obtain ksh ksh choose parameter namely ksh define min log applying obtain control systems lemma exists therefore see lkx thus system periodic event trigger exponentially stable decay rate remark 
easily seen proof theorem conclusion holds following event trigger based difference state min min remark theorem assume generates analytic semigroup compact assumptions used existence sampling periods respect periodic system exponentially stable replace different assumptions corollary example illustrate control method theorem beam structural damping let denote space time variables assume beam hinged one end beam freely sliding clamped end end suppose shear force applied dynamics beam given lateral deflection beam time location along beam damping constant abstract evolution equation beams recall results developed sec abstract evolution equation form beam expansions generator input operator respect certain basis introduce operator domain masashi wakaiki hideki sano consider state space hilbert space inner product define operator domain introduce state vector rewrite form case using technique set input operator denotes dirac distribution support let next obtain expansions respect riesz basis eigenvalues given associate eigenvectors defined sin eigenvector associated eigenvalue define see eigenvectors riesz basis furthermore eigenvectors associated eigenvalues biorthogonal thus riesz spectral operator definition follows theorem ifn control systems moreover infinitesimal generator analytic semigroup given ifn noting also schauder basis expand following way numerical simulation since see assumption holds every set system decomposition sec check assumptions case subspace spanned using basis expansions rewrite clearly controllable thus assumptions satisfied let set feedback gain similarly rewrite respect basis following way see proposition parameter satisfies system event trigger exponential stable decay rate fig illustrates time responses beam initial states cos apply event trigger parameters approximate state space linear span computation time responses fig shows time responses close control control observe fig event trigger reduce number control updates appendix derivation assume hilbert space integrable functions usual norm consider equation bounded input operator since tbf satisfies tbf xds tbf masashi wakaiki hideki sano event triggered control control control time position norm event triggered control control time input fig time response shown theorem obtain tbf tbf tbf tbf tbf tbf control systems hand fubini theorem also tbf tbf tbf tbf therefore tbf tbf tbf tbf thus obtain appendix proof proposition simplicity notation omit superscript define error obtain define lyapunov function using lyapunov inequality obtain moreover see satisfies masashi wakaiki hideki sano since condition holds condition arbitrary follows since positive definite matrix see feedback system exponential stable decay rate completes proof references borgers heemels properties control systems ieee trans automat control curtain oostveen necessary sucient conditions strong stability distributed parameter systems systems control curtain zwart introduction linear systems theory new york springer dolk borgers heemels decentralized dynamic control guaranteed performance ieee trans automat control donkers heemels control guaranteed improved decentralized ieee trans automat control espitia girard marchand prieur control linear hyperbolic systems conservation laws automatica goebel sanfelice teel hybrid dynamical systems ieee control syst heemels donkers periodic control linear systems automatica heemels donkers teel periodic control linear systems ieee trans automat control russell admissible input elements systems hilbert space 
and a Carleson measure criterion. SIAM J. Control.
Jiang, Cui, Zhuang. Control of distributed parameter systems using a mobile sensor-actuator. Comput. Math.
Lehmann, Lunze. Event-based control with communication delays and packet losses. Int. J. Control.
Logemann. Stabilization of infinite-dimensional systems by dynamic sampled-data feedback. SIAM J. Control.
Logemann, Rebarber, Townley. Stability of infinite-dimensional sampled-data systems. Trans. Amer. Math. Soc.
Logemann, Rebarber, Townley. Generalized sampled-data stabilization of well-posed linear systems. SIAM J. Control.
Pazy. On the applicability of Lyapunov's theorem in Hilbert space. SIAM J. Math. Anal.
Pazy. Semigroups of Linear Operators and Applications to Partial Differential Equations. Springer, New York.
Rebarber, Townley. Generalized sampled data feedback control of distributed parameter systems. Systems Control Letters.
Rebarber, Townley. Nonrobustness of stability of infinite-dimensional systems under sample and hold. IEEE Trans. Automat. Control.
Rebarber, Townley. Robustness with respect to sampling for stabilization of Riesz spectral systems. IEEE Trans. Automat. Control.
Selivanov, Fridman. Distributed event-triggered control of diffusion semilinear PDEs. Automatica.
Tabuada. Event-triggered real-time scheduling of stabilizing control tasks. IEEE Trans. Automat. Control.
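For reference, the event-trigger definitions that appear only in garbled form in the text above can be written out as follows. This is a reconstruction: the norm subscripts, the threshold epsilon, and the cap tau_max are assumed placeholders rather than the paper's exact quantities.

```latex
% Input-difference trigger (the type adopted in the text), with the input
% held constant between events:
t_{k+1} \;=\; \inf\bigl\{\, t > t_k : \|F x(t) - F x(t_k)\|_U \ge \varepsilon \,\bigr\},
\qquad u(t) = F x(t_k), \quad t_k \le t < t_{k+1}.

% Capped variant used in the stability analysis:
t_{k+1} \;=\; \min\Bigl\{\, t_k + \tau_{\max},\;
  \inf\{\, t > t_k : \|F x(t) - F x(t_k)\|_U \ge \varepsilon \,\} \Bigr\}.

% State-difference trigger (shown in the text to permit Zeno behavior in
% infinite dimensions):
t_{k+1} \;=\; \inf\bigl\{\, t > t_k : \|x(t) - x(t_k)\|_X \ge \varepsilon \,\bigr\}.

% Zeno-freeness requires a positive minimum inter-event time:
\inf_{k \in \mathbb{N}_0} \,(t_{k+1} - t_k) \;>\; 0.
```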
3
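To see the input-difference trigger at work, the heated-rod example discussed above can be simulated on a modal truncation of the heat equation, with the control input held between events. The sketch below is purely illustrative: the gains, threshold, input profile, and initial state are toy values, not the paper's parameters.

```python
import numpy as np

# Modal truncation: x_n'(t) = lam_n x_n(t) + b_n u(t), with the sample-and-hold
# feedback u(t) = f @ x(t_k) held between events. All values are toy choices.
N, dt, T = 8, 1e-4, 2.0
lam = -(np.arange(N) * np.pi) ** 2      # n = 0 gives a marginally stable mode
b = np.ones(N)                          # toy input-profile coefficients
f = -np.ones(N)                         # toy stabilizing feedback gains
eps = 1e-2                              # event threshold (assumed)

x = 1.0 / (1.0 + np.arange(N))          # toy initial modal state
u_held = f @ x
events = 0
for _ in range(int(T / dt)):
    if abs(f @ x - u_held) >= eps:      # input-difference trigger fires
        u_held = f @ x
        events += 1
    x = x + dt * (lam * x + b * u_held) # explicit Euler step

print(f"events: {events}, final state norm: {np.linalg.norm(x):.2e}")
```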
nov diverse accurate image description using variational additive gaussian encoding space liwei wang alexander schwing svetlana lazebnik aschwing slazebni university illinois abstract paper explores image caption generation using conditional variational autoencoders cvaes standard cvaes fixed gaussian prior yield descriptions little variability instead propose two models explicitly structure latent space around components corresponding different types image content combine components create priors images contain multiple types content simultaneously several kinds objects first model uses gaussian mixture model gmm prior second one defines novel additive gaussian prior linearly combines component means show models produce captions diverse accurate strong lstm baseline vanilla cvae fixed gaussian prior showing particular promise introduction automatic image captioning challenging conditional generation task captioning techniques based recurrent neural nets term memory lstm units take input feature representation provided image trained maximize likelihood reference human descriptions methods good producing relatively short generic captions roughly fit image content unsuited sampling multiple diverse candidate captions given image ability generate candidates valuable captioning profoundly ambiguous image described many different ways also images hard interpret even humans let alone machines relying imperfect visual features short would like posterior distribution captions given image estimated model accurately capture nature language uncertainty depicted image achieving diverse image description major theme several recent works deep generative models natural fit goal date generative adversarial models gans attracted attention dai proposed jointly learning generator produce descriptions evaluator assess well description fits image shetty changed training objective generator reproducing captions generating captions indistinguishable produced humans paper also explore generative model image description unlike training adopt conditional variational cvae formalism starting point work jain trained vanilla cvae generate questions given images training time given image sentence cvae encoder samples latent vector gaussian distribution encoding space whose parameters mean variance come gaussian prior zero mean unit variance vector fed decoder uses together features input image generate question encoder decoder jointly trained maximize upper bound likelihood reference questions conference neural information processing systems nips long beach usa predicted object labels person cup donut dining table predicted object labels cup fork knife sandwich dining table mouse woman sitting table cup person sitting table cup table two plates donuts cup woman sitting table plate man sitting table plate food close plate food table table plate food plate food sandwich white plate topped plate food plate food table next cup lstm baseline close table two plates close table plate food close plate food table close table two plates food close table plates food lstm baseline close plate food table close plate food sandwich close plate food close plate food white plate close plate food sandwich figure example output proposed approach compared lstm baseline see section details method show top five sentences following consensus captions produced method diverse accurate object labels person sentences man woman standing room man woman playing game man standing next woman room man standing next woman field man standing next woman 
suit object labels person bus sentences man woman sitting bus man woman sitting train man woman sitting bus man woman sitting bench man woman sitting bus object labels person remote sentences man woman playing video game man woman playing video game man woman playing video game man woman playing game remote woman holding nintendo wii game controller object labels person train sentences man woman sitting train woman woman sitting train woman sitting train next train woman sitting bench train man woman sitting bench figure illustration additive latent space structure controls image description process modifying object labels changes weight vectors associated semantic components latent space turn shifts mean vectors drawn modifies resulting descriptions intuitive way given images test time decoder seeded image feature different samples multiple result multiple questions jain obtained promising question generation performance cvae model equipped fixed gaussian prior task image captioning observed tendency learned conditional posteriors collapse single mode yielding little diversity candidate captions sampled given image improve behavior cvae propose using set gaussian priors latent space different means standard deviations corresponding different modes types image content concreteness identify modes specific object categories dog dog cat detected image would like encourage generated captions capture starting idea multiple gaussian priors propose two different ways structuring latent space first represent distribution vectors using gaussian mixture model gmm due intractability gaussian mixtures vae framework also introduce novel additive gaussian prior directly adds multiple semantic aspects space image contains several objects aspects corresponding means latent space require mean encoder distribution close weighted linear combination respective means cvae formulation additive gaussian prior able model richer flexible encoding space resulting diverse accurate captions illustrated figure additional advantage additive prior gives interpretable mechanism controlling captions based image content shown figure experiments section show outperform lstms vanilla cvae baselines challenging mscoco dataset showing marginally higher accuracy far best diversity controllability background proposed framework image captioning extends standard variational conditional variant briefly set necessary background variational vae given samples dataset vaes aim modeling data likelihood end vaes assume data points cluster around manifold parameterized embeddings encodings obtain sample corresponding embedding employ decoder often based deep nets since decoder posterior tractably computable approximate distribution referred encoder taking together ingredients vaes based identity log dkl log dkl relates likelihood conditional hard compute dkl posterior readily available decoder distribution use deep nets however choosing encoder distribution sufficient capacity assume dkl small thus know lower bound log maximized encoder decoder parameters conditional variational cvae tasks like image captioning interested modeling conditional distribution desired descriptions representation content input image vae identity straightforwardly extended conditioning encoder decoder distributions training encoder decoder proceeds maximizing lower bound conditional log log dkl parameters decoder distribution encoder distribution respectively practice following stochastic objective typically used max log dkl approximates expectation log using 
samples drawn approximate posterior typically single sample used backpropagation encoder produces samples achieved via reparameterization trick applicable restrict encoder distribution gaussian mean standard deviation output deep net gaussian mixture prior additive gaussian prior key observation behavior trained cvae crucially depends choice prior prior determines learned latent space structured kldivergence term encourages encoder distribution given particular description image content close prior distribution vanilla cvae formulation one adopted prior dependent fixed gaussian choice computationally convenient experiments sec demonstrate task image captioning resulting model poor diversity worse accuracy standard lstm clearly prior change based content image however need efficiently compute closed form still needs simple structure ideally gaussian mixture gaussians motivated considerations encourage latent space structure composed modes clusters corresponding different types image content given image assume obtain distribution entries nonnegative sum one current work concreteness identify set object categories reliably detected automatically car person mscoco dataset conduct experiments direct supervision categories note however formulation general applied definitions modes clusters including latent topics automatically obtained unsupervised fashion model gaussian mixture weights components means standard deviations defined weights represents mean vector component practice components use standard deviation decoder decoder switch cluster vector cluster vector encoder encoder figure overview models sample vectors given image switches one cluster center another encourages embedding image close average objects means directly tractable optimize gmm prior therefore approximate divergence stochastically step training first draw discrete component according cluster probability sample resulting gaussian component dkl log log plug term obtain objective function optimize encoder decoder parameters using stochastic gradient descent sgd principle prior parameters also trained obtained good results keeping fixed means drawn randomly standard deviations set constant explained section test time order generate description given image first sample component index sample corresponding component distribution one limitation procedure image contains multiple objects individual description still conditioned single object would like structure space way directly reflect object cooccurrence end propose simple novel conditioning mechanism additive gaussian prior image contains several objects weights corresponding means latent space want mean encoder distribution close linear combination respective means weights spherical covariance matrix figure illustrates difference model model introduced order train model using objective need compute dkl prior given analytic expression derived dkl log log plug term obtain stochastic objective function training encoder decoder parameters initialize mean variance parameters way keep fixed throughout training log log reconstruction loss log log wck lstm lstm lstm lstm image feature cluster vector lstm lstm lstm lstm lstm lstm image feature cluster vector lstm figure illustration encoder left decoder right see text details next need specify architectures encoder decoder shown fig encoder uses lstm map image vector caption point latent space specifically lstm receives image feature first step cluster vector second step caption word word hidden state last step transformed mean 
vectors log variances log using linear layer summed weights respectively generate desired encoder outputs note encoder used training time input cluster vectors produced ground truth object annotations decoder uses different lstm receives input first image feature cluster vector vector sampled conditional distribution next receives start symbol proceeds output sentence word word produces end symbol training inputs derived ground truth encoder used encourage reconstruction provided caption test time ground truth object vectors available rely automatic object detection explained section experiments implementation details test methods mscoco dataset largest clean image captioning dataset available date current release contains training validation images five reference captions many captioning works data enlarge training set follow split released allocates images training validation testing features image features use activations network cluster object vectors corresponding mscoco object categories training time consist binary indicators corresponding ground truth object labels rescaled sum one example image labels person car dog results cluster vector weights corresponding objects zeros elsewhere test images obtained automatically object detection train faster detector mscoco categories using split net test time use threshold confidence scores output detector determine whether image contains given object weights equal baselines lstm baseline obtained deleting vector input decoder architecture shown fig gives strong baseline comparable google show tell generate different candidate sentences using lstm use beam search width second baseline given vanilla cvae fixed gaussian prior following completeness report performance method well baselines without cluster vector input parameter settings training lstms use encoding vocabulary size number words training set input gets projected word embedding layer dimension lstm hidden space dimension found lstm settings worked well models three models cvae use dimension space wanted least equal number categories make sure vector corresponds unique set cluster weights means clusters randomly initialized unit ball obj std beam lstm cvae cvae agx cvae table oracle upper bound performance according metric obj indicates whether object cluster vector used number samples std standard deviation beam beam width beam search used caption quality metrics short cider rouge meteor spice obj std beam lstm cvae cvae agx cvae table consensus using cider see caption table legend changed throughout training standard deviations set training time tuned validation set test time values used results reported tables networks trained sgd learning rate first epochs reduced half every epochs average models converge within epochs results big part motivation generating diverse candidate captions prospect able using discriminative method performance method quality best candidate caption set first evaluate different methods assuming oracle choose best sentence among candidates next realistic evaluation use consensus approach automatically select single top candidate per image finally assess diversity generated captions using uniqueness novelty metrics oracle evaluation table reports caption evaluation metrics oracle setting taking maximum relevant metric candidates compare caption quality using five metrics bleu meteor cider spice rouge calculated using mscoco caption evaluation tool augmented author spice lstm baseline report scores attained among candidates generated using beam search 
suggested cvae sample fixed number vectors corresponding prior distributions numbers samples given table trend vanilla cvae falls short even lstm baseline upperbound performance considerably exceeds lstm given beam unique novel size per image sentences lstm cvae cvae agx cvae table diversity evaluation method report percentage unique candidates generated per image sampling different numbers vectors also report percentage novel sentences sentences seen training set top sentences following consensus noted cvae novel sentences get roughly novel sentences obj std predicted object labels predicted object labels open refrigerator filled lots food refrigerator filled lots food drinks refrigerator filled lots food large open refrigerator filled lots food refrigerator filled lots food items man standing next brown horse man standing next horse person standing next brown white horse man standing next horse man man holding brown white horse lstm baseline refrigerator filled lots food refrigerator filled lots food top refrigerator filled lots food inside refrigerator filled lots food inside refrigerator filled lots food items lstm baseline close person horse close horse horse black white photo man wearing hat black white photo person wearing hat black white photo man hat predicted object labels predicted object labels bed person holding umbrella front building woman holding red umbrella front building person holding umbrella rain man woman holding umbrella rain man holding red umbrella front building baby laying bed blanket woman laying bed baby man laying bed baby baby laying bed blanket baby laying bed cat lstm baseline man holding umbrella city street man holding umbrella rain man holding umbrella rain person holding umbrella rain man holding umbrella rain umbrella lstm baseline baby laying bed blanket baby laying bed animal little girl laying bed blanket little girl laying bed blanket man laying bed blanket figure comparison captions produced method lstm baseline method top five captions following consensus shown right choice standard deviation large enough number samples obtains highest upper bound big advantage cvae variants lstm easily used generate candidate sentences simply increasing number samples way lstm increase beam width computationally prohibitive detail top two lines table compare performance lstm without additional object cluster vector input show make dramatic difference improving lstm baseline matter adding stronger conditioning information input similarly cvae using object vector additional conditioning information encoder decoder increase accuracy somewhat account improvements see one thing noticed models without object vector sensitive standard deviation parameter require careful tuning demonstrate table includes results several values cvae models consensus evaluation realistic evaluation next compare models consensus specifically given test image first find nearest neighbors training set embedding space learned network proposed take reference captions neighbors calculate consensus scores candidate captions use cider metric based observation give evaluations bleu object labels cat suitcase black white cat sitting suitcase cat sitting suitcase cat sitting suitcase cat sitting top suitcase black white cat sitting suitcase cat sitting suitcase table small white black cat sitting top suitcase cat sitting piece luggage small gray white cat sitting suitcase white cat sitting top suitcase black white cat sitting suitcase black white cat sitting top suitcase cat sitting table black 
white cat sitting next suitcase cat sitting front suitcase cat sitting wooden bench sun close cat sitting suitcase cat sitting top blue suitcase large brown white cat sitting top suitcase cat sitting top suitcase white cat suitcase object labels cup dining table teddy bear teddy bear sitting next teddy bear teddy bear sitting table next table teddy bear sitting top table teddy bear sitting table next cup teddy bear sitting next table teddy bear sitting table teddy bear sitting next table filled animals teddy bear sitting table ateddy bear sitting table next teddy bear white teddy bear sitting next table couple animals sitting table teddy bear sitting next bunch flowers couple teddy bears sitting table large teddy bear sitting table bunch animals sitting table group teddy bears sitting table large teddy bear sitting table next table teddy bear sitting next pile books group teddy bears sitting next white teddy bear sitting wooden table two teddy bears sitting next couple teddy bears sitting next white teddy bear sitting next table teddy bear sitting next wooden table large animal sitting top table object labels cat suitcase chair cat sitting suitcase cat sitting top suitcase cat sitting suitcase floor black white cat sitting suitcase close cat suitcase white black cat sitting suitcase cat sitting chair white black cat sitting top suitcase black white cat sitting chair cat sitting chair room large brown white cat sitting top desk cat sitting wooden bench sun close cat sitting suitcase black white cat sitting next piece luggage small white black cat sitting chair black white cat sitting top suitcase cat sitting top blue chair cat sitting top suitcase object labels cup dining table teddy bear sandwich cake teddy bear sitting next teddy bear teddy bear sitting table next cup teddy bear sitting table teddy bear teddy bear teddy bear sitting top teddy bear sitting top table teddy bear sitting next cup table teddy bear teddy bear teddy bear sitting table next glass two teddy bears sitting table next table topped cake couple cake sitting top table table cake bunch animals cake bunch white teddy bear sitting next glass table cake bear table bunch teddy bears table two plates food table topped variety food table two teddy bears table cake plate food couple sandwiches sitting top table table topped cake two plates food table bunch cakes table cake cup white plate food next table white table topped lots food figure comparison captions produced two different versions input object vectors images models draw samples show resulting unique captions table shows evaluation based single sentence test image performance get near upper bounds table numbers follow similar trend achieving better performance baselines almost metrics also noted goal outperform state art absolute terms performance actually better best methods date although trained different split tends get slightly higher numbers although advantage smaller results table one important still big gap performance improving candidate sentences important future direction diversity evaluation compare generative capabilities different methods report two indicative numbers table one average percentage unique captions set candidates generated image number meaningful cvae models sample candidates drawing different samples multiple result caption lstm candidates obtained using beam search definition distinct table observe cvae little diversity much better decisive advantage similarly also report percentage generated sentences test set seen training set really 
makes sense assess novelty sentences plausible compute percentage based top sentences per image consensus based novelty ratio cvae well however since generates fewer distinct candidates per image absolute numbers novel sentences much lower see table caption details qualitative results figure compares captions generated lstm baseline four example images captions tend exhibit diverse sentence structure wider variety nouns verbs used describe image often yields captions accurate open refrigerator refrigerator better reflective cardinality types entities image captions mention person horse lstm tends mention one even manage generate correct candidates still gets right number people candidates shortcoming detected objects frequently end omitted candidate sentences lstm language model accommodate bear backpack one hand shows capacity lstm decoder generate combinatorially complex sentences still limited hand provides robustness false positive detections controllable sentence generation figure illustrates output models changes change input object vectors attempt control generation process consistent table observe number samples produces unique candidates flexible responsive content object vectors first image showing cat add additional object label chair able generate captions mentioning chair similarly second example add concepts sandwich cake generate sentences capture still controllability leaves something desired since observed trouble mentioning two three objects sentence especially unusual combinations discussion experiments shown proposed approaches generate image captions diverse accurate standard lstm baselines similar accuracies according table clear edge terms diversity unique captions per image controllability quantitatively table qualitatively figure related work date cvaes used image question generation far know work first apply captioning mixture gaussian prior used cvaes colorization approach essentially similar though based mixture density networks uses different approximation scheme training cvae formulation advantages cgan approach adopted recent works aimed general goals gans expose control structure latent space additive prior results interpretable way control sampling process gans also notoriously tricky train particular discrete sampling problems like sentence generation dai resort reinforcement learning shetty approximate gumbel sampler cvae training much straightforward represent space simple vector space multiple modes possible impose general graphical model structure though incurs much greater level complexity finally viewpoint inference work also related general approaches diverse structured prediction focus extracting multiple modes single energy function hard problem necessitating sophisticated approximations prefer circumvent cheaply generating large number diverse plausible candidates good enough ones identified using simple mechanisms future work would like investigate general formulations conditioning information necessarily relying object labels whose supervisory information must provided separately sentences obtained example automatically clustering nouns noun phrases extracted reference sentences even clustering vector representations entire sentences also interested tasks question generation cluster vectors represent question type many etc well image content control output modifying vector would case particularly natural acknowledgments material based upon work supported part national science foundation grants sloan foundation would like thank jian peng yang 
References

Anderson, Fernando, Johnson, Gould. SPICE: Semantic propositional image caption evaluation. ECCV.
Batra, Yadollahpour, Shakhnarovich. Diverse M-best solutions in Markov random fields. ECCV.
Bishop. Mixture density networks.
Chen, Fang, Lin, Vedantam, Gupta, Zitnick. Microsoft COCO captions: Data collection and evaluation server. arXiv preprint.
Dai, Lin, Urtasun, Fidler. Towards diverse and natural image descriptions via a conditional GAN. ICCV.
Denkowski, Lavie. Meteor Universal: Language specific translation evaluation for any target language. EACL Workshop on Statistical Machine Translation.
Deshpande, Yeh, Forsyth. Learning diverse image colorization. CVPR.
Devlin, Cheng, Fang, Gupta, Deng, Zweig, Mitchell. Language models for image captioning: The quirks and what works. arXiv preprint.
Devlin, Gupta, Girshick, Mitchell, Zitnick. Exploring nearest neighbor approaches for image captioning. arXiv preprint.
Farhadi, Hejrati, Sadeghi, Young, Rashtchian, Hockenmaier, Forsyth. Every picture tells a story: Generating sentences from images. ECCV.
Hershey, Olsen. Approximating the Kullback-Leibler divergence between Gaussian mixture models. ICASSP.
Hochreiter, Schmidhuber. Long short-term memory. Neural Computation.
Jain, Zhang, Schwing. Creativity: Generating diverse questions using variational autoencoders. CVPR.
Jang, Poole. Categorical reparameterization with Gumbel-softmax. ICLR.
Johnson, Duvenaud, Wiltschko, Datta, Adams. Structured VAEs: Composing probabilistic graphical models and variational autoencoders. NIPS.
Kingma, Welling. Auto-encoding variational Bayes. ICLR.
Kiros, Salakhutdinov, Zemel. Multimodal neural language models. ICML.
Kulkarni, Premraj, Ordonez, Dhar, Choi, Berg, Berg. BabyTalk: Understanding and generating simple image descriptions. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Kuznetsova, Ordonez, Berg, Berg, Choi. Generalizing image captions for image-text parallel corpus. ACL.
Lin. ROUGE: A package for automatic evaluation of summaries. Text Summarization Branches Out (ACL Workshop), Barcelona, Spain.
Liu, Zhu, Guadarrama, Murphy. Improved image captioning via policy gradient optimization of SPIDEr. ICCV.
Mao, Yang, Wang, Huang, Yuille. Deep captioning with multimodal recurrent neural networks. ICLR.
Mitchell, Han, Dodge, Mensch, Goyal, Berg, Yamaguchi, Berg, Stratos, Daumé III. Midge: Generating image descriptions from computer vision detections. EACL.
Papineni, Roukos, Ward, Zhu. BLEU: A method for automatic evaluation of machine translation. ACL.
Ren, Girshick, Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. NIPS.
Shetty, Rohrbach, Hendricks, Fritz, Schiele. Speaking the same language: Matching machine to human captions by adversarial training. ICCV.
Simonyan, Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint.
Sohn, Lee, Yan. Learning structured output representation using deep conditional generative models. NIPS.
Vedantam, Lawrence Zitnick, Parikh. CIDEr: Consensus-based image description evaluation. CVPR.
Vijayakumar, Cogswell, Selvaraju, Sun, Lee, Crandall, Batra. Diverse beam search: Decoding diverse solutions from neural sequence models. arXiv preprint.
Vinyals, Toshev, Bengio, Erhan. Show and tell: A neural image caption generator. CVPR.
Vinyals, Toshev, Bengio, Erhan. Show and tell: Lessons learned from the MSCOCO image captioning challenge. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Wang, Lazebnik. Learning deep structure-preserving image-text embeddings. CVPR.
Wang, Xiao, Zhang, Zhuang. Diverse image captioning via GroupTalk. IJCAI.
Xu, Kiros, Cho, Courville, Salakhutdinov, Zemel, Bengio. Show, attend and tell: Neural image caption generation with visual attention. ICML.
You, Jin, Wang, Fang, Luo. Image captioning with semantic attention. CVPR.
Happy Travelers Take Big Pictures: A Psychological Study with Machine Learning and Big Data

Xuefeng Liang, Lixin Fan, Yuen Peng Loh, Yang Liu, Song Tong
IST, Graduate School of Informatics, Kyoto University, Kyoto, Japan; Nokia Technologies, Tampere, Finland; Centre of Image and Signal Processing, University of Malaya, Kuala Lumpur, Malaysia

[Figure: a wide view angle photo versus a narrow view angle photo. The view angle of a travel photo is influenced by the photographer's subconscious emotional state, a living example of the psychological theory examined in this study.]

Abstract. Psychology research is usually conducted via extensive laboratory experiments, yet is rarely tested or disproved with big data. In this paper, we make use of travel photos and traveler ratings to test an influential psychological theory which suggests that positive emotions broaden one's visual attention. The core hypothesis examined in this study is that positive emotion is associated with a wider attention, and hence highly rated sites should trigger more wide-angle photographs. By analyzing a large corpus of travel photos, we find a strong correlation between the preference for wide-angle photos and the high rating of tourist sites on TripAdvisor. We are able to carry out this analysis through the use of deep learning algorithms that classify photos into wide and narrow angles. The present study is an exemplar of how big data and deep learning can be used to test laboratory findings in the wild.

Introduction. Recent advances in technologies, especially deep learning, in conjunction with big data offer psychologists an unprecedented opportunity to test theories outside the laboratory. Cognitive scientists and psychologists are increasingly embracing big data and machine learning to significantly advance the understanding of human behavior and cognition. For example, sequential dependence in cognitive functions was investigated through millions of online reviews posted on Yelp (Vinson, Dale, and Jones), and a machine learning model trained on a standard corpus of online text resulted in human-like semantic biases (Caliskan, Bryson, and Narayanan). These emerging studies have demonstrated that big data, or naturally occurring data sets (BONDS), can be used to complement traditional laboratory paradigms and to refine theories (Griffiths; Goldstone and Lupyan; Jones; Paxton and Griffiths). Following the footsteps of these earlier calls to action, we present an example of leveraging machine learning techniques and BONDS to complement and test psychological theories. Concretely, we investigate a real-world scenario in which travelers' photo-taking behavior is influenced by a hypothesized psychological mechanism, namely the broaden-and-build theory of positive emotions (Fredrickson; Fredrickson and Branigan). According to Fredrickson's influential theory, positive emotions broaden and globalize the attentional scope of an observer and result in processing of the global picture, whereas negative emotions correlate with a narrowed and localized attentional focus and induce processing of local elements. This psychological hypothesis has been supported by extensive laboratory experiments (Rowe, Hirsh, and Anderson; Tamir and Robinson; Pourtois, Schettino, and Vuilleumier; Vanlessen et al.), which widely employed the flanker task requiring participants to respond in a global-local visual processing task where the visual stimuli (geometric figures or letters) were either compatible or incompatible (see the supplementary material for details). However, to the best of our knowledge, the theory has never been tested on real-world big data. Moreover, it would be imprudent to embrace theories blindly, since traditional psychological experiments are often conducted in restricted laboratory environments with limited numbers of subjects, which may result in considerable bias. In order to scrutinize the theory in the travel photo-taking scenario, we first develop a deep learning algorithm whose performance is in sync with human judgment, and subsequently analyze photographers' behaviors on big data. To address confounding factors, we set up carefully designed experiments. The results demonstrate that travel photographers' inclination toward a specific camera viewpoint is largely influenced by the photographers' emotion at the time of photo taking. These kinds of influence might be subconscious to the photographers, but they are nevertheless statistically consistent and significant. Roughly speaking, photographers seem to prefer wide-angle photos over narrow-angle ones at high-rating tourist sites, whereas at lower-rating
sites preference appears moderate even going reverse direction see fig experiments details finding accord notion positive emotions broaden attention trigger photographs moreover study demonstrates substantial boost numbers diversity experimental subjects taking advantages machine learning techniques vast amount behavior data already available internet challenging traditional laboratory paradigms hope set experiments well proposed deep learning algorithm new method added psychologists toolbox addition methods adopted work potential significance realworld applications discovering obscure highvalue tourist sites zhuang preventing mental illness special populations mining social media data stewart davis materials methods discuss data methods employed investigate hypothesis specifically detail criteria procedures tourist sites selection photo collection followed proposed machine learning algorithms tourist sites selection test theory using bonds studied travel photos selected tourist sites hosted tripadvisor https selection based five criteria popularity recommended top search engines tripadvisor national geographic travel leisure objectivity least votes site regardless language age gender nationality etc generality located across asia europe americas diversity keeping site types diverse possible avoid religious places independence appropriate distance sites avoid based available locations sites selected travel photos associated sites used study targets see supplementary details figure illustrates positions sites distribution suitable candidates green curve fig shows samples sites photo datasets table datasets estimation machine learning methods psychological experiments name dtrain dtest photos source flickr tripadvisor tripadvisor used three newly collected datasets study shown table estimation proposed machine learning methods training data dtrain collected flickr according aforementioned sites whereas testing data dtest made evenly distributed amount photos tourist sites randomly collected tripadvisor reason choosing data different sources twofold avoid overlap training testing datasets photos hosted tripadvisor uploaded travelers rated tourist sites thus siteratings photo contents would closely related due second consideration created dataset consists travel photos taken tourist sites collected tripadvisor without overlapping dtest test hypothesized correlation tourists positive emotion choice photos third set called consists random photos collected dataset thomee without using keywords photos used test preferences behaviors completely random neutral emotion mode nevertheless raw data collected completely uncontrolled manner following rectification procedures applied photos firstly scrutinized photos erroneously tagged meaningless duplicated photos noises filtered dataset secondly selfies eliminated datasets due intrinsic ambiguities attention photos one persons well rectifying data dataset labeled estimation machine learning algorithms build training dataset dtrain testing dataset dtest photos manually labeled either recruited five subjects male mean age designed binary classification task task narrowangle photos demonstrated let subjects correct understanding task photos rows columns simultaneously shown screen give better visual comparison subject classified photos two categories procedure iteratively carried photos checked collecting batch results removed ambiguous photos less trend selfies relatively recent cultural phenomenon fast becoming integral trend everyday people also 
travelers though different intuition current study interest look future figure samples tourist sites across world view samples california usa figure selected sites superimposed number distribution suitable sites green curve sites similar ratings aligned vertically consistent votes way consists photos total almost perfect agreement fleiss kappa among five subjects used ground truth estimation proposed methods methods order effectively test hypothesis large dataset developed two machine learning models classification section gives detailed account designs evaluations analysis said models figure example photos hvs cues determine view angle focus lens model narrowangle spatially large conceptually small object spatially large conceptually large object spatially small conceptually small object hvs model first model mimics basics human visual system hvs determining viewpoints formulated two cues focus cue scale cue focus cue based finding large number professionally shot view photos adhere focus lens model hvs tsotsos focuses center object focus surrounding background blurred fringe shown fig model transform images frequency domain using contourlet transform nsct cunha zhou surf features bay tuytelaars van gool extracted quantized using fisher vector perronnin mensink afterwards classification implemented trained support vector machine svm however many photos shot cameras smart phones follow focus model entire scene appears sharp fig therefore scale cue derived observers ability differentiate views measuring size objects namely spatial size object size measured photo indicated boxes fig bigger one fig conceptual size realistic proportion object person fig small object building fig big object referring fig determined object spatially large conceptually small otherwise photo measure spatial size object bounding box proposal method namely adobe refined bing boxes fang whereas conceptual size measured convolutional neural network cnn krizhevsky sutskever hinton hence hvs model built following two specific visual cues human vision address distinct photo characteristics model secondly looked deep learning technique using single cnn perform view angle classification opposed hvs model account success shown cnn discovering high level features variety tasks donahue zeiler fergus yosinski lee however conventional cnns utilize single high level feature multiple layers convolution according pilot investigation features crucial view angle classification may vanish multiple convolution pooling operations conventional cnns therefore designed cumulative feature cnn extracts features stage figure architecture proposed cumulative feature cnn features convolution layer cumulated one representation cumulates one representation hence incorporating low high level features classification task figure illustrates architecture model travel photos inputs outputs respective narrow wide angle categorization specifically introduced additional convolution paths convx existing convolution conv layer produce features new convx layers placed pooling layers pool conv followed pool pooling layers added poolx convx shown path illustration fig use pooling size pool poolx kernel sizes conv transfered alexnet architecture krizhevsky sutskever hinton convx kernel follows size feature map convolved feature map hence kernel size mapped neurons convx layers trained conv layers thus kernels expected focus significant features different levels hence directly summed obtain cumulative feature proceed subsequent fully connected layers 
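The cumulative-feature idea just described lends itself to a compact sketch. The following PyTorch toy is our own illustrative reconstruction: the channel counts, kernel sizes, and the common 6x6 grid are assumptions, not the paper's exact CF-CNN hyper-parameters. It taps each pooling stage with an extra convolution path (the convX layers) and directly sums the per-stage features into one cumulative representation before the fully connected classifier.

import torch
import torch.nn as nn

class CFCNN(nn.Module):
    def __init__(self, n_classes=2, common=256):
        super().__init__()
        # Simplified AlexNet-like trunk (conv + pool stages).
        self.stage1 = nn.Sequential(
            nn.Conv2d(3, 64, 11, stride=4, padding=2), nn.ReLU(),
            nn.MaxPool2d(3, 2))
        self.stage2 = nn.Sequential(
            nn.Conv2d(64, 192, 5, padding=2), nn.ReLU(),
            nn.MaxPool2d(3, 2))
        self.stage3 = nn.Sequential(
            nn.Conv2d(192, 256, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(3, 2))
        # Side paths (convX): one extra convolution per stage, trained
        # jointly with the trunk, each pooled to a common 6x6 size so
        # low- and high-level features can be summed directly.
        self.side = nn.ModuleList([
            nn.Sequential(nn.Conv2d(c, common, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveMaxPool2d(6))
            for c in (64, 192, 256)])
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(common * 6 * 6, 512), nn.ReLU(),
            nn.Linear(512, n_classes))

    def forward(self, x):
        feats = []
        for stage, side in zip((self.stage1, self.stage2, self.stage3),
                               self.side):
            x = stage(x)
            feats.append(side(x))                # per-stage feature
        cumulative = torch.stack(feats).sum(0)   # direct sum -> one feature
        return self.classifier(cumulative)

model = CFCNN()
print(model(torch.randn(1, 3, 227, 227)).shape)  # torch.Size([1, 2])

Summing rather than concatenating keeps the classifier input small while still letting kernels at different depths contribute; this mirrors the stated motivation that features crucial to view-angle classification may vanish after repeated convolution and pooling in a conventional CNN.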
classification used wide narrow angle photos dtrain training remaining validation training process training image augmented resizing shorter side dimensions maintaining aspect ratio random cropping flipping performed followed normalization subtracting average image dataset finally dimension image fed network model trained using stochastic gradient descend approach training batch size weight decay learning rate logarithmically reduces every training epoch additionally transferred imagenet weights alexnet improve generalization main feature extraction layers model yosinski order prevent training stopped epochs significant reduction trend validation error ference validation training errors acceptable range validation performance achieved thus proceed perform later classification experiments using model performance evaluation analysis performances hvs models evaluated dataset dtest travel photos sites collected tripadvisor achieved overall classification accuracy major improvement comparison hvs model reached table shows outperforms hvs model sites additionally show table ratio photos based classification disregarding accuracy closely matches ratio ground truth indication trained better hvs approach considerable likeness human therefore used testing theory real world data table classification results hvs cnn models narrow wide angle photos tourist sites site total hvs model narrow wide accuracy model narrow wide accuracy table comparison ratio ground truth hvs model model site hvs model follows figure examples narrow top wide bottom angle photos high activation regions bright areas also take look explore contributing factors performance visualizing last activation maps highest level features network find spatial location photos responsible classification features lower level layers convx visualized known less abstract features like edges high frequency details specifically extract activation maps produced last pooling operation test image dimension maps shown fig max pooling performed extracted maps third dimension obtain aggregated map dimension resized size original image final map used mask luminance channel original image obtain visualization area features used classification operation given interesting insight wide narrow view angle classification task mainly border image major contributor classification opposed objects initially thought figure shows several examples activations fringe image even though objects within image clearly shown irrespective viewpoints interesting finding suggests strong classification achieved looking image fringe instead objects goes beyond focus cue scale cue designed hvs model believe one component missing hvs model caused experiments hypothesize prominent tourist sites induces positive emotions travelers subsequently prompt capture photos ones test theory behaviors structure analysis lay simple linear regression proportion photos rating score approximated size tourist site respective parameters estimated offset model derived theory based two assumptions emotions experiments reported considered represented traveler ratings tripadvisor scope attention naturally unconsciously manifested choice photos note competing factor included model might also affect choice paper adopt pearson correlation coefficients pcc quantify compare influences respect see tables first model fitted tourist sites elaborated supplementary optimal fitting reached parameters note relative low model indicates certain amount data explained model order look influential predictor conduct 
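The visualization procedure described above (max-pool the last activation maps across the channel dimension, upsample the aggregated map to the image size, then use it to mask the image's luminance so that high-activation regions appear bright) can be sketched as follows. The stand-in feature extractor and tensor sizes are illustrative assumptions, not the study's trained network.

import torch
import torch.nn as nn
import torch.nn.functional as F

def activation_mask(features, img):
    """features: module mapping (1,3,H,W) -> (1,C,h,w) convolutional maps."""
    with torch.no_grad():
        fmap = features(img)
    agg = fmap.max(dim=1, keepdim=True).values        # max over channels
    agg = F.interpolate(agg, size=img.shape[-2:], mode="bilinear",
                        align_corners=False)          # resize to image size
    agg = (agg - agg.min()) / (agg.max() - agg.min() + 1e-8)
    return img.mean(dim=1, keepdim=True) * agg        # masked luminance

# Stand-in extractor; in the study this would be the trained network's
# layers up to the last pooling operation.
features = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.MaxPool2d(2))
img = torch.rand(1, 3, 64, 64)
print(activation_mask(features, img).shape)   # torch.Size([1, 1, 64, 64])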
following two experiments experiment test respect aim experiment assess emotions induced different tourist sites would affect choice travel photos travel photos dataset tourist sites classified cfcnn figure plots proportion photos site table pearson correlation coefficients pccs proportion photos sites pcc asia europe americas results figure shows notable correlation proportions photos across world pearson correlation coefficient pcc indicates strong dependent preference thus deem principal predictor model surprisingly observation consistent theory investigate influence local region culture sites classified three subgroups according asia europe americas shown table fig trends proportion photos pccs three subgroups similar joint group conceivably influence local regions cultures negligible experiment test respect choice view angle travel photos may also affected figure strong correlation proportions photos tourist sites across world analogous correlations appear across asia europe americas modest correlation proportions photos size tourist site people naturally inclined take photos location open space large object interest vice versa hence size site could confounding factor shown model aim experiment assess relation preferences behaviors end define according size interest location sites obvious object interest refer physical sizes meters statues buildings object available estimate size region interest meters sites extremely open space mountains canyons seashores sizes capped see supplementary details table pearson correlation coefficients pccs proportion photos sites pcc sites small sites medium sites hand positive emotions reinforce tendency happy excited photographer take photos regardless modulation effect line theory tested laboratory also suggests visual attention result multiple factors experiment test random photos linear regression model discloses influences exerted emotions photo taking behaviors aim experiment assess default behavior case completely random mode neutral emotion therefore site independent dataset randomly collected dataset thomee without using geotag keywords photos classified statistical analysis subset flickr containing million data travel photos vast diversity see supplementary example photos large sites results figure table illustrate modest correlation pcc size site proportion photos correlation noticeably weaker margin pcc proportion photos since fig shows sites unevenly distributed according look factor separate three subgroups namely small sites medium sites large sites specifically sites size meters small group sites size meters large group others make medium group calculate pcc group list table results show even weaker correlations three subgroups view dwarf influence respect reinforces hypothesis examination note interplay human emotion behavior twofold one hand open spaces large objects make easy take photos secondary factor figure proportions photos lower rating sites average random data high rating sites average order comparison emotional states choose sites whose ratings higher high rating sites according site distribution fig line rule statistics whereas sites rating lower termed lower rating sites results turns proportion photos green bar fig random data reaches approximately close investigation random photos revealed vast majority photos photos everyday life proportion although slightly favour photos reveals statistically normal behavior composing wide narrow angle photos since random photos supposedly taken neural emotion particular ratio serves 
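The correlation analysis running through these experiments boils down to a Pearson test between per-site quantities. A minimal sketch follows; the numbers are made up for illustration, whereas the study computes the wide-angle proportions from the classifier's output over the collected travel photos.

from scipy.stats import pearsonr

ratings    = [3.0, 3.5, 4.0, 4.0, 4.5, 4.5, 5.0, 5.0]      # hypothetical site ratings
wide_props = [0.46, 0.48, 0.55, 0.57, 0.62, 0.64, 0.70, 0.72]  # hypothetical P(wide)

r, p = pearsonr(ratings, wide_props)
print(f"PCC = {r:.3f}, p = {p:.3g}")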
This ratio serves as a reference to be compared with the ratios estimated under the two mood states. At high-rating sites the average proportion of wide-angle photos is apparently above that of lower-rating sites. Moreover, the average proportion at lower-rating sites closely resembles the ratio for random photos. We conjecture that this similarity can be ascribed to the neutral emotion associated with lower-rating sites: such sites are unable to induce positive emotions, and subsequently travelers' behaviors are not influenced in a positive manner. On the other hand, the significant positive emotion associated with high-rating sites induces wide-angle behavior via broadened visual attention. Another finding is the greater deviation of view-angle proportions at lower-rating sites than at high-rating ones, which signifies that good sites share a consistent ability to induce positive emotions in tourists, an ability lacking at lower-rating sites.

Conclusion and discussion. In this work, we tested a psychological theory outside the laboratory by leveraging recent machine learning methods and big data from the internet. Our study revealed a strong correlation between the preference for wide-angle photos and the high rating of tourist sites. This preference can be ascribed to the notion that positive emotions broaden visual attention and trigger wide-angle photo compositions. Alternatively, a neutral emotion induces only a slight favor for wide-angle photos, which is likely what is associated with lower-rating sites. Together with the controlling condition, this result suggests that visual attention is the result of multiple factors. We were able to carry out the analysis through the development of a deep learning algorithm for photo view-angle classification that achieves performance in sync with humans. We hope this set of experiments, as well as the proposed algorithm, can become a new method added to psychologists' toolbox. Moreover, the methods adopted in this work have potential significance for real-world applications. For example, recent research has focused on discovering new tourism resources by mining text or evaluating picture quality on SNS, but few studies have tried to link tourists' experiences and mood states with data, particularly image data; the theory, now supported by real-world big data in this study, can add a new measure to this task and boost tourism economics. In the mental welfare and healthcare field, researchers are also reviewing big data resources that can be used to characterize applications addressing mental illness and suicide prevention; on the other side of the theory, negative emotions induce a narrowed attention, and our machine learning method may help special populations live better lives by mining SNS data.

References

Bay, Tuytelaars, Van Gool. SURF: Speeded up robust features. ECCV, Springer.
Caliskan, Bryson, Narayanan. Semantics derived automatically from language corpora contain human-like biases. Science.
da Cunha, Zhou. The nonsubsampled contourlet transform: Theory, design, and applications. IEEE Transactions on Image Processing.
Donahue, Jia, Vinyals, Hoffman, Zhang, Tzeng, Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. ICML.
Fang, Cao, Xiao, Zhu, Yuan. Adobe Boxes: Locating object proposals using object adobes. IEEE Transactions on Image Processing.
Fredrickson, Branigan. Positive emotions broaden the scope of attention and thought-action repertoires. Cognition and Emotion.
Fredrickson. The broaden-and-build theory of positive emotions. Philosophical Transactions of the Royal Society B: Biological Sciences.
Goldstone, Lupyan. Discovering psychological principles by mining naturally occurring data sets. Topics in Cognitive Science.
Griffiths. Manifesto for a new (computational) cognitive revolution. Cognition.
Jones. Developing cognitive theory by mining naturalistic data. In Big Data in Cognitive Science.
Krizhevsky, Sutskever, Hinton. ImageNet classification with deep convolutional neural networks. NIPS.
Lee, Chan, Mayo, Remagnino. How deep learning extracts and learns leaf features for plant classification. Pattern Recognition.
Paxton, Griffiths. Finding the traces of behavioral and cognitive processes in big data and naturally occurring datasets. Behavior Research Methods.
Perronnin, Mensink. Improving the Fisher kernel for large-scale image classification. ECCV, Springer.
Pourtois, Schettino, Vuilleumier. Brain mechanisms for emotional influences on perception and attention: What is magic and what is not. Biological Psychology.
Rowe, Hirsh, Anderson. Positive affect increases the breadth of attentional selection. Proceedings of the National Academy of Sciences.
Stewart, Davis. Big data in mental health research: Current status and emerging possibilities. Social Psychiatry and Psychiatric Epidemiology.
Tamir, Robinson. The happy spotlight: Positive mood and selective attention to rewarding information. Personality and Social Psychology Bulletin.
Thomee, Shamma, Friedland, Elizalde, Poland, Borth. YFCC100M: The new data in multimedia research. Communications of the ACM.
Tsotsos. A Computational Perspective on Visual Attention. MIT Press.
Vanlessen, Rossi, De Raedt, Pourtois. Positive emotion broadens attention focus through decreased position-specific spatial encoding in early visual cortex: Evidence from ERPs. Cognitive, Affective, & Behavioral Neuroscience.
Vinson, Dale, Jones. Decision contamination in the wild: Sequential dependencies in Yelp review ratings. Proceedings of the Annual Meeting of the Cognitive Science Society.
Yosinski, Clune, Bengio, Lipson. How transferable are features in deep neural networks? NIPS.
Yosinski, Clune, Nguyen, Fuchs, Lipson. Understanding neural networks through deep visualization. arXiv preprint.
Zeiler, Fergus. Visualizing and understanding convolutional networks. ECCV, Springer.
Zhuang, Liang, Yoshikawa. Anaba: An obscure sightseeing spots discovering system. IEEE International Conference on Multimedia and Expo (ICME).
Practical combinations of repetition-aware data structures

Djamal Belazzougui (CERIST, Algeria), Fabio Cunial (Max Planck Institute of Molecular Cell Biology and Genetics, Dresden, Germany), Travis Gagie (Department of Computer Science, University of Helsinki, and Helsinki Institute for Information Technology, Finland), Nicola Prezza (Department of Mathematics and Computer Science, University of Udine, Italy), Mathieu Raffinot (Laboratoire Bordelais de Recherche en Informatique, CNRS, Bordeaux, France)

(This work was partially supported by Academy of Finland grants, Center of Excellence in Cancer Genetics Research.)

Abstract. Highly repetitive collections of strings are increasingly being amassed by genome sequencing and genetic variation experiments, as well as by storing all versions of human-generated files, like webpages and source code. Existing indexes for locating all the exact occurrences of a pattern in a string take advantage of a single measure of repetition. However, multiple, distinct measures of repetition all grow sublinearly in the length of a repetitive string. In this paper we explore the practical advantages of combining data structures whose size depends on distinct measures of repetition. The main ingredient of our structures is the run-length encoded BWT (RLBWT), which takes space proportional to the number of runs in the Burrows-Wheeler transform of a string. We describe a range of practical variants that combine RLBWT with the set of boundaries of the LZ77 factors of a string, which take space proportional to the number of factors. Such variants use, respectively, the RLBWT of the string and the RLBWT of its reverse, or just one RLBWT inside a bidirectional index, or just one RLBWT with support for unidirectional extraction. We also study the practical advantages of combining RLBWT with the compact directed acyclic word graph (CDAWG) of a string, a data structure that takes space proportional to the number of extensions of maximal repeats. Our approaches are easy to implement, and provide competitive tradeoffs on significant datasets.

ACM subject classification: data structures; pattern matching. Keywords and phrases: repetitive strings; locate; count; run-length encoded BWT; LZ77 factorization; CDAWG.

Introduction. Locating all the exact occurrences of a string in a massive collection of similar texts is a fundamental primitive in an era in which genomes of multiple related species, multiple strains of the same species, and multiple individuals are sequenced at an increasing pace. Data structures designed for such repetitive collections take space proportional to a specific measure of repetition, for example the number z of factors in the LZ77 parse, or the number r of runs in the Burrows-Wheeler transform. In previous work we showed that we can achieve competitive theoretical tradeoffs between space and time for locate queries by combining data structures that depend on distinct measures of repetition, when all such measures grow sublinearly in the length of the string. Specifically, we described a data structure that takes a number of words of space approximately proportional to z + r and that reports all the pocc primary and socc secondary occurrences of a pattern of length m (the two kinds of occurrence are recalled below) with only polylogarithmic overhead per occurrence. This compares favorably to the reporting time of LZ77 indexes, which pay a factor proportional to the height of the parse tree per occurrence, and it also compares favorably in space to solutions based on the RLBWT and suffix array samples, which take roughly r + O(n/k) words of space to achieve O((m + occ * k) log log n) reporting time, where k is the suffix array sampling rate. We also introduced a data structure whose size depends on the number of extensions of maximal repeats, and which reports all the occ occurrences of a pattern of length m in O(m log log n + occ) time. The main component of our constructions is the RLBWT, which we use for counting the number of occurrences of a pattern; to answer locate queries we combine it with the CDAWG and with data structures from LZ77 indexes, rather than with suffix array samples. In this paper we engineer a range of practical variants of these approaches, and we compare their tradeoffs to a representative set of state-of-the-art indexes for repetitive collections, including the RLCSA, a number of LZ77-index implementations, and a recent implementation of the hybrid index. One of our indexes based on RLBWT and LZ77 factors uses an amount of memory comparable to LZ77 indexes, and answers count queries between two and four orders of magnitude faster than the hybrid
index implementations long patterns index uses less space hybrid index answers locate queries one two orders magnitude faster number implementations fast fastest implementation short patterns index based rlbwt cdawg answers locate queries four ten times faster version rlcsa uses comparable memory extremely short patterns index achieves speedups even greater ten respect rlcsa preliminaries strings let integer alphabet let separator let string denote reverse set starting positions string circular version set repeat string satisfies repeat respectively iff respectively iff maximal repeat repeat say maximal repeat rightmost respectively leftmost string respectively string reasons space assume reader familiar notion suffix tree stt tree define denote equivalently label edge denote belazzougui string label node well known string respectively iff internal node stt respectively iff internal node stt since closed prefix operation bijection set maximal repeats set nodes suffix tree lie paths start root end nodes labelled rightmost maximal repeats symmetrically since closed suffix operation bijection set maximal repeats set nodes suffix tree lie paths start root end nodes labelled leftmost maximal repeats compact directed acyclic word graph denoted cdawgt follows minimal compact automaton recognizes set suffixes seen minimization stt leaves merged node sink represents nodes except sink correspondence maximal repeats source corresponds empty string set accepting nodes consists sink maximal repeats also occur suffix like suffix tree transitions labelled substrings since maximal repeat corresponds subset suffixes cdawgt built putting equivalence class nodes stt belong maximal unary path explicit weiner links note also subgraph stt induced maximal repeats isomorphic spanning tree cdawgt reasons space assume reader familiar notion uses transform including array mapping backward search paper use bwtt denote bwt use range denote lexicographic interval string bwt implicit context say bwtt run iff bwtt moreover substring bwtt either contains least two distinct characters well known repetitions induce runs bwtt example bwt consists runs length least run length one denote number runs bwtt call encoded bwt denoted rlbwtt representation bwtt takes words space supports rank select operations see since difference negligible practice simplify notation denote implicit context factorization abbreviated rest paper greedy decomposition defined follows assume virtually preceded set distinct characters alphabet assume already computed prefix length longest prefix satisfies rest paper drop subscripts whenever clear context string indexes compressed suffix array denoted rlcsat follows consists compressed rank data structure bwtt sampled suffix array denoted ssat given pattern use rank data structure find interval bwtt contains characters precede occurrences length interval uncompressed number occurrences locate specific occurrence start character precedes bwtt use rank queries move backward reach character whose position sampled cvit practical combinations data structures thus average time locating occurrence inversely proportional size ssat fast locating needs large ssa regardless compressibility dataset suggested ways reduce size ssa perform well enough real repetitive datasets authors include software released reasons space assume reader familiar indexes see recall primary occurrence pattern one crosses ends phrase boundary factorization occurrences called secondary computed primary occurrences locating socc secondary 
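For concreteness, here is a quadratic-time toy of the greedy LZ77-style decomposition defined above. It exists only to make the definition executable (real constructions run in linear time): each factor is the longest prefix of the remaining suffix that also occurs starting at an earlier position, with overlaps allowed, and a brand-new character forms a factor of length one.

def lz77_factors(t: str):
    """Greedy LZ77-style parse of t (exposition-only, O(n^2))."""
    factors, i, n = [], 0, len(t)
    while i < n:
        length = 0
        for l in range(1, n - i + 1):
            # Is there an occurrence of t[i:i+l] starting strictly before i?
            if t.find(t[i:i + l], 0, i + l - 1) != -1:
                length = l
            else:
                break                      # prefix property: longer l cannot match
        length = max(length, 1)            # new character: factor of length 1
        factors.append(t[i:i + length])
        i += length
    return factors

print(lz77_factors("abababbbbaba"))        # ['a', 'b', 'abab', 'bbb', 'aba']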
occurrences reduces range reporting takes socc log log time data structure words space locate primary occurrences use data structure range reporting grid marker factor lexicographic order preceded text lexicographically reversed prefix ending phrase boundary data structure takes words space returns phrase boundaries immediately followed factor specified range immediately preceded reversed prefix specified range time number phrase boundaries reported ukkonen used two patricia trees one factors reversed prefixes ending phrase boundaries locate primary occurrences query first tree range distinct factors lexicographic order start query second tree range reversed prefixes starting ranges returned trees correct iff factor starts reversed prefix phrase boundary starts check first range choose factor range compare first characters check second range choose reversed prefix range compare first characters takes time every thus time total assuming compressed replacing uncompressed text augmented compressed representation store log space later given find occ occurrences log occ log log time know advance patterns length store substrings consisting characters within distance nearest phrase boundary use find primary occurrences approach called hybrid indexing proposed several times recently see references therein details composite string indexes possible combine rlbwtt set starting positions factors building data structure takes words space reports pocc primary occurrences pattern log log time since data structure core paper summarize works follows primary occurrence cover boundaries two factors thus consider every possible way placing inside rightmost boundary two factors every possible split two parts either factor proper prefix factor every use range reporting queries list occurrences conform split described section encode sequence implicitly follows use bitvector last last iff sat belazzougui iff sat last position factor represent bitvector predecessor data structure partial ranks using words space let stt suffix tree let set loci stt factors consider list node labels sorted lexicographic order easy build data structure takes words space implements log time function returns possibly empty interval see together last rlbwtt rlbwtt data structure output construction given first perform backward search rlbwtt determine number occurrences number zero stop backward search store table interval bwtt every compute interval bwtt every using backward search rlbwtt last last never ends last position factor discard value otherwise convert interval last last reversed prefixes end last position factor rank operations last implemented log log time using predecessor queries get lexicographic interval list distinct factors using operation log time use intervals query range reporting data structure also possible combine rlbwtt cdawgt building data structure takes words space reports occ occurrences log log occ time number maximal repeats specifically every node cdawg store variable recall arc cdawg means maximal repeat obtained extending maximal repeat right left thus every arc cdawg store first character variable store length right extension implied variable length left extension implied computed every arc cdawg connects maximal repeat sink store starting position string total space used cdawg words number runs bwtt shown well alternative construction could use cdawgt rlbwtt use rlbwt count number occurrences log log time number zero use cdawg report occ occurrences occ time using technique already sketched 
specifically since know occurs perform blind search cdawg typically done patricia trees keep variable initialized zero stores length prefix matched far keep variable initialized one stores starting position inside last maximal repeat encountered search every node cdawg choose arc constant time using hashing increment increment search leads sink arc report stop search ends node associated maximal repeat determine occurrences performing traversal nodes reachable cdawg updating variables described reporting every arc leads sink total number nodes arcs reachable occ cvit practical combinations data structures combining rlbwt factors practice combination rlbwt factorization described section exploring range practical variants decreasing size specifically addition version described section call full follows implement variant drop rlbwtt simulating bidirectional index call variant bidirectional follows variant drop rlbwtt range reporting data structure subset suffix tree nodes call variant light follows another variant use sparse version parsing call sparse follows moreover design number practical optimization speed locate queries see appendix implement variants using representation rlbwt spaceefficient one described recall latter encoded follows store one character per run string mark one beginning run bitvector vall every store lengths runs character consecutively specifically every length represented representation allows one map rank access queries bwtt rank select access queries vall bitvectors representation takes log log bits space reduce multiplicative factor term log storing vall one ones arbitrary constant easy see still able answer queries rlbwt using vectors reconstruct positions missing ones vall using log log bits space query times multiplied factor experiments set represent string sdsl bitvectors sdsl full index first variant engineered version data structure described section store rlbwtt rlbwtt bitvector end log bits marks rank among suffixes every suffix last position factor symmetrically bitvector begin log bits marks rank among suffixes every suffix first position factor geometric range data structures implemented wavelet trees sdsl range data structure supports locating primary occurrences storing permutation factors sorted lexicographically order induced corresponding ones end words every character wavelet tree lexicographic rank factor among factors locating primary occurrences need label every point range data structure text position allocate log bits rather log bits every label using label rank corresponding one array begin thus data structure takes log bits space range data structure stores points whose coordinates locating secondary occurrences every source code implementations available based sdsl library belazzougui point labeled rank corresponding one array begin wavelet tree implements range data structure store points whose set coordinates map text coordinates domain using bitvector takes log bits space coordinates could repeated since two factors could share source start point keep track duplicates using succinct bitvector takes bits space coordinate repeated times encoded overall range data structure takes log bits space finally need way compute lexicographic range string among factors implement simpler strategy one proposed specifically recall factors substrings equivalently nodes suffix tree recall also bwt intervals two nodes suffix tree either disjoint contained one another sort bwt intervals factors order induced traversal suffix tree two distinct nonempty 
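As a runnable stand-in for the blind-search-and-report procedure just described, the sketch below uses the uncompacted suffix automaton (DAWG), of which the CDAWG is the compacted version: after descending from the source on the pattern, all occurrences are gathered by traversing the inverse suffix links, mirroring the idea of traversing everything reachable and reporting at the sink. This is a conceptual toy, not the paper's constant-words-per-node encoding.

class SuffixAutomaton:
    def __init__(self, s):
        self.next, self.link, self.len = [{}], [-1], [0]
        self.end, self.clone = [-1], [False]
        last = 0
        for i, c in enumerate(s):
            cur = self._new(self.len[last] + 1, i, False)
            p = last
            while p != -1 and c not in self.next[p]:
                self.next[p][c] = cur
                p = self.link[p]
            if p == -1:
                self.link[cur] = 0
            else:
                q = self.next[p][c]
                if self.len[p] + 1 == self.len[q]:
                    self.link[cur] = q
                else:                                  # split: make a clone of q
                    cl = self._new(self.len[p] + 1, self.end[q], True)
                    self.next[cl] = dict(self.next[q])
                    self.link[cl] = self.link[q]
                    while p != -1 and self.next[p].get(c) == q:
                        self.next[p][c] = cl
                        p = self.link[p]
                    self.link[q] = self.link[cur] = cl
            last = cur

    def _new(self, length, end, clone):
        self.next.append({}); self.link.append(-1); self.len.append(length)
        self.end.append(end); self.clone.append(clone)
        return len(self.len) - 1

    def occurrences(self, pat):                        # pat assumed nonempty
        v = 0
        for c in pat:                                  # "blind" descent from source
            if c not in self.next[v]:
                return []
            v = self.next[v][c]
        kids = {}
        for u, l in enumerate(self.link):              # inverse suffix links
            kids.setdefault(l, []).append(u)
        out, stack = [], [v]
        while stack:                                   # report all end positions
            u = stack.pop()
            if not self.clone[u]:
                out.append(self.end[u] - len(pat) + 1)
            stack.extend(kids.get(u, []))
        return sorted(out)

sa = SuffixAutomaton("banana")
print(sa.occurrences("ana"))   # [1, 3]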
intervals iff iff contained data structure sorted array intervals takes log bits given bwt interval string find lexicographic range among sorted distinct factors log time follows using order described finding intervals strictly smaller starting first interval greater equal find intervals equal contained intervals factors either requires one binary search since intervals contiguous summary full index takes log log log bits space supports count queries log log time locate queries occ log time bidirectional index drop rlbwtt simulate using rlbwtt applying synchronization step performed bidirectional bwt indexes see references therein strategy penalizes time complexity locate queries becomes quadratic length pattern moreover since implementation store separately character synchronization step requires rank queries find number characters smaller given character inside bwt interval operation could performed log time string represented wavelet tree summary bidirectional variant index takes log log log bits space supports count queries log log time supports locate queries log occ log time light index sparsification computed interval pattern bwtt locate primary occurrences characters occurrence inside sequence first positions intervals sorted array thus use gap encoding save log bits space adds multiplicative factor log query times clarity describe simpler version intervals encoded integers log bits cvit practical combinations data structures range every primary occurrence pattern overlaps last position factor implement forward extraction select queries rlbwtt approach requires rlbwtt range data structure bitvector endt marks last position every factor text bitvector endbw marks last position every factor bwtt integers log bits connecting corresponding ones endbw endt array plays role sparse suffix array sampling used rlcsa reduce space even sparsifying factorization intuitively factorization collection strings similar much denser inside inside thus excluding long enough contiguous regions factorization outputting factors inside regions could reduce number factors dense regions formally let consider following denoted factor xzd yzd size factorization longest prefix xzd yzd appears least twice make index described section work need sample suffix array lexicographic ranks correspond last position every need redefine primary occurrences fully contained inside locate need extract additional characters occurrence pattern order locate primary occurrences start inside range data structure must also built sources factors xzd implementation index takes log log log bits space answers locate queries occ log time count queries log log time combining rlbwt cdawg practice combination rlbwt cdawg described section study effect two representations cdawg memory first representation graph encoded sequence integers every integer represented sequence bytes seven least significant bits every byte used encode integer significant bit flags last byte integer nodes stored sequence according topological order graph obtained cdawg inverting direction arcs encode pointer node successor cdawg store difference first byte first byte sequence sink difference replaced shorter code choose store length maximal repeat corresponds node rather offset inside every arc since lengths short smaller number arcs practice second encoding exploit fact subgraph suffix tree induced maximal repeats spanning tree cdawgt see section specifically encode spanning tree balanced parenthesis scheme described resolve arcs cdawg belong tree using 
corresponding tree operations operations work node identifiers thus need convert node identifiers version considered paper obtained setting requiring text virtually preceded distinct characters source code implementations available belazzougui first byte byte sequence cdawg vice versa encode monotone sequence first byte nodes byte sequence using representation elias fano uses log bits per starting position number bytes byte sequence finally observe classical cdawg construction algorithms online algorithm described design algorithms build representation cdawg bwtt bwtt using optimal additional space specifically let enumerateleft function returns set distinct characters appear bwtt necessarily lexicographic order prove following lemmas appendix lemma let given representation bwtt answers enumerateleft time tel per element output time tlf build topology cdawgt well first character length label arc randomized tel tlf time zero space addition input output lemma let given representation bwtt answers enumerateleft time tel per element output time tlf build topology cdawgt well first character length label arc randomized tel tlf time bits space addition input output experimental results test implementations five dna datasets pizza chili repetitive corpus include whole genomes approximately strains eukaryotic species saccharomyces cerevisiae saccharomyces paradoxus plots collection approximately thousand substrings genome bacterium respectively escherichia coli haemophilus influenzae artificially repetitive string obtained concatenating hundred mutated copies substring human genome denoted plots compare results index implementation sdsl sampling rate represented black circles plots implementation sampling rates triangles plots five variants implementation index described squares recent implementation compressed hybrid index diamonds index uses rrr bitvectors wavelet tree brevity call implementation index uses suffix trie reverse trie process pattern length measure maximum resident set size number cpu seconds process spends user compressing files implementation large window makes uncompressed files times bigger corresponding compressed files compile sequential version turned thus bitvector rather succinct bitvector used mark sampled positions suffix array block size psi vectors bytes perform experiments single core ghz intel xeon processor access ram running centos measure resources gnu time compile gcc cvit practical combinations data structures locate count queries discarding time loading indexes averaging measurements one thousand observe two distinct regimes locate queries corresponding short patterns shorter approximately long patterns respectively figures full bidirectional light index implementations red circles plots achieve new useful tradeoff dataset neither short long patterns expected running time per pattern bidirectional index depends quadratically pattern length observe superlinear growth light index well optimizations described appendix red dots effective bidirectional index effectiveness increases pattern length manage shave running time patterns length size bidirectional index disk average smaller size full index disk size light index disk approximately smaller size bidirectional index disk experiment skipping characters opening new phrase light index sparsification green plots size sparse index skip rate disk approximately smaller size light index disk short patterns memory used sparse index becomes smaller rlcsa comparable index running time per occurrence one two orders 
magnitude greater index comparable rlcsa sampling rates equal greater figure top long patterns however sparse index becomes one two orders magnitude faster variants index except variant using comparable memory function pattern length running time per occurrence sparse index grows slowly running time suggesting sparse index becomes fast patterns length figure top sparse index approximately orders magnitude slower hybrid index since size hybrid index depends maximum pattern length sparse index becomes smaller hybrid index patterns length possibly even shorter figure top expected sparse index faster index hybrid index count queries especially short patterns specifically sparse index two four orders magnitude faster variants index largest difference patterns length figure bottom difference sparse index variant shrinks pattern length increases similar trends hold hybrid index full bidirectional light indexes show similar count times sparse index disk size cdawg comparable disk size rlcsa sampling rate figure bottom using succinct representation cdawg blue dots plots shaves disk size resident set representation blue circles however using representation shaves time succinct representation depending dataset pattern length using cdawg answer locate queries achieve new tradeoff long patterns figure bottom however short patterns running time per occurrence cdawg times smaller running time per occurrence version rlcsa uses comparable memory patterns length two cdawg achieves speedups even greater generate random patterns contain characters using genpatterns tool pizza chili corpus belazzougui figure tradeoffs indexes color state art black top row patterns length bottom row patterns length clarity rlcsa sparse index tested also additional configurations mentioned section figure traeoffs cdawg blue compared rlcsa triangles sampling rate patterns length left right cvit practical combinations data structures figure locate time per occurrence top count time per pattern bottom function pattern length sparse index skip rate index hybrid index count plots show also index rlcsa figure top disk size sparse index skip rate compared hybrid index maximum pattern length index rlcsa sampling rate bottom disk size cdawg compared rlcsa sampling rate belazzougui acknowledgements thank miguel providing implementations variants described daniel valenzuela providing implementation described future work designing indexes repetitive texts increasingly active field beyond scope paper review recent proposals see especially since implemented would like draw attention index described simultaneously independently gagie however think improved made competitive practice index intended collections many similar strings databases genomes species main idea choose one strings reference build relative parse entire dataset respect reference treating phrases using reference reverse auxiliary data structures apply dynamic programming quickly compute ways given pattern decomposed suffix phrase possibly empty sequence complete phrases prefix phrase possibly empty possible dictionary potential phrases substrings reference fixed albeit large change parse contrast dictionaries using parse auxiliary data structures quickly find whether possible decompositions pattern occur parse dataset occurrences correspond occurrences pattern cross phrase boundaries quickly find occurrences pattern dataset unfortunately many distinct phrases parse may compress well raises question reduce number distinct phrases without increasing number phrases much suppose 
build compressed bruijn graph collection strings bruijn graph collapsed every maximal path whose internal nodes assign edges distinct consider string walk graph edge walk crosses replacing substring causing cross edge edge results parse number distinct phrases number edges graph assuming strings collection length least notice also pattern length least pattern corresponds one walk uncompressed graph may start finish middle edges compressed graph walk pattern occur collection using bruijn graph also removes need choosing reference using range reporting may able improve compression first removing edges original bruijn graph compressing second replacing substrings cause cross edges sufficiently long edge labels compressed bruijn graph modification pattern may appear collection even correspond walk uncompressed graph however substring causes cross edge long edge label compressed graph certainly replace substring edge follows pattern length least need perform four searches parse determine whether pattern occurs collection cvit practical combinations data structures plan implement test modification gagie index soon report results future paper acknowledgements thank miguel providing implementations variants described daniel valenzuela providing implementation described references diego arroyuelo gonzalo navarro kunihiko sadakane stronger based compressed text indexing algorithmica djamal belazzougui linear time construction compressed text indices compact space proceedings annual acm symposium theory computing pages acm djamal belazzougui fabio cunial travis gagie nicola prezza mathieu raffinot composite data structures combinatorial pattern matching pages springer anselm blumer janet blumer david haussler ross mcconnell andrzej ehrenfeucht complete inverted files efficient text retrieval analysis journal acm timothy chan kasper green larsen mihai orthogonal range searching ram revisited proceedings annual symposium computational geometry pages acm maxime crochemore christophe hancart automata matching patterns handbook formal languages pages springer maxime crochemore renaud direct construction compact directed acyclic word graphs alberto apostolico jotun hein editors cpm volume lecture notes computer science pages springer huy hoang jesper jansson kunihiko sadakane sung fast relative similar sequences theoretical computer science peter elias richard flower complexity simple retrieval problems journal acm jacm hector ferrada travis gagie tommi hirvola simon puglisi hybrid indexes repetitive datasets philosophical transactions royal society london paolo ferragina gonzalo navarro pizza chili repetitive corpus http accessed travis gagie pawel gawrychowski juha yakov nekrich simon puglisi faster proceedings conference language automata theory applications pages travis gagie gawrychowski juha yakov nekrich simon puglisi faster pattern matching latin theoretical informatics pages springer travis gagie pawel gawrychowski juha yakov nekrich simon puglisi faster pattern matching proceedings latin american symposium theoretical informatics pages simon gog timo beller alistair moffat matthias petri theory practice plug play succinct data structures international symposium experimental algorithms sea pages belazzougui juha esko ukkonen parsing index structures string matching proc south american workshop string processing wsp pages sebastian kreft based master thesis department computer science university chile sebastian kreft gonzalo navarro compressing indexing repetitive sequences theoretical computer 
science shanika kuruppu simon puglisi justin zobel relative compression genomes storage retrieval proceedings symposium string processing information retrieval pages veli gonzalo navarro succinct suffix arrays based encoding combinatorial pattern matching pages springer veli gonzalo navarro jouni niko storage retrieval highly repetitive sequence collections journal computational biology donald morrison algorithm retrieve information coded alphanumeric journal acm jacm ian munro venkatesh raman succinct representation balanced parentheses static trees siam march url http adam novak convenient repository rlcsa library https accessed igor pavlov home http accessed nicola prezza compressed transform suffix array sampling https accessed nicola prezza compressed transform sparse suffix array sampling https accessed mathieu raffinot replacing sampling cdawg localisation bwt indexing approaches https accessed mathieu raffinot maximal repeats strings jouni niko veli gonzalo navarro compressed indexes superior highly repetitive sequence collections string processing information retrieval international symposium spire melbourne australia november pages yoshimasa takabatake yasuo tabei hiroshi sakamoto improved practical highly repetitive texts proceedings symposium experimental algorithms pages daniel valenzuela chico compressed hybrid index repetitive collections proceedings fifteenth international symposium experimental algorithms sea lecture notes computer science springer june dan willard range queries possible space theta information processing letters jacob ziv abraham lempel universal algorithm sequential data compression ieee transactions information theory information processing letters cvit practical combinations data structures proof lemma use algorithm described enumerate representation every node stt performing traversal suffix link tree algorithm works tel time bits working provides node interval bwtt length label well list children every child interval bwtt first character label edge stt since label every node cdawg maximal repeat set nodes cdawg excluding sink correspondence subset nodes suffix tree specifically node stt corresponds node cdawg substring check counting number distinct characters bwt interval every time number greater one discovered new node cdawg assign unique identifier incrementing global counter scan every child stt store hash table tuple interval used key unique number assigned note every quadruplet insert hash table unique key bwt interval length one leaf arc connects cdawg sink character first character label edge stt compute starting position arcs cdawg batch inverting bwtt querying hash table hash table implemented support insertion querying randomized time bwt inversion takes tlf time amount memory charged output build arcs cdawg directed sink perform another traversal tree order first traversal assume second traversal enumerate node whose bwt interval present hash table child node suffix tree corresponds node cdawg corresponds node cdawg well substring add arc cdawg otherwise thus cdawg must contain arc shortest corresponding node suffix tree reached unary path tree thus keep auxiliary buffer initially empty every time encounter node suffix tree whose interval tuple exists hash table append buffer tuple moreover empty buffer transform every tuple buffer arc cdawg size buffer charged output proof lemma proceed appendix traversing tree enumerating intervals bwtt every node suffix tree every substring detect node also create enumeration algorithm described uses 
stack trick fit working space bits without trick working space would proportional largest number maximal repeats lie path tree charged output lemma belazzougui new node cdawg assign new unique identifier enumerate push tuple hash table interval bwtt used key unique number assigned bwt interval length one arc connects sink character cdawg compute starting position arcs cdawg batch inverting bwtt querying hash table note since possibly pushing intervals bwtt destinations implicit weiner links suffix tree hash table allow presence distinct tuples key build arcs cdawg directed sink perform another traversal tree order first traversal assume second traversal enumerate node suffix tree whose interval bwtt present hash table set tuples corresponds node cdawg well substring add arc cdawg every tuple otherwise thus cdawg must contain arc every tuple shortest corresponding node suffix tree reached unary path tree thus keep auxiliary buffer initially empty every time encounter node suffix tree whose interval set tuples exists hash table append buffer corresponding set tuples moreover empty buffer transform every tuple buffer arc cdawg size buffer charged output speeding locate queries indexes based rlbwt factors indexes based rlbwt factors locate queries engineered number ways thanks rlbwt know total number occurrences starting locate thus stop locating soon found occurrences could add compressed bitvector first flags position iff sat bwtt could mark positions interval bwtt contains zeros first discard following steps every discarded suffix strategy saves backward search bidirectional index range reporting data structure could use reversed prefixes rather reversed prefixes would allow checking position whether ends position last position factor implement ever stringent filter one could store additional range reporting data structure uses reversed prefixes reverting data structure bidirectional index could quit synchronized backward search bwtt soon find suffix followed contain factor suffix moreover could quit backward search soon detect cvit practical combinations data structures neither factor suffix factor contain factor suffix tests implemented using variations function replacing interval subintervals described every backward step would overhead case bidirectional index could speed backward search perform every precomputing table intervals bwtt bwtt strings length would take log bits additional space log log bits store suffix factor due lack space main paper study effects first two optimizations
8
power data reduction matching george rolf mar school engineering computing sciences durham university institut softwaretechnik und theoretische informatik berlin germany abstract finding matchings undirected graphs arguably one central graph primitives graphs solvable time however several applications running time still slow investigate almost data reduction used preprocessing alleviate situation specifically focus almost kernelization start deeper systematic study general graphs bipartite graphs data reduction algorithms easily comply form preprocessing every solution strategy exact approximate heuristic thus making attractive various settings introduction matching powerful piece algorithmic magic matching given graph one compute set nonoverlapping edges matching arguably among fundamental primitives allowing algorithm specifically graph maximum matching found time improving upper time bound even bipartite graphs resisted decades research recently however duan pettie presented algorithm computes approximate matching running time dependency log unweighted case algorithm micali vazirani implies lineartime case running time dependency take different route first give quest optimal solutions second focus efficient data reduction solving instance significantly shrinking size actually solving context decision problems parameterized algorithmics known kernelization particularly active area algorithmic research problems spirit behind approach thus closer identification efficiently linearly solvable special cases matching quite body work direction instance since augmenting path found linear time standard augmenting algorithm runs time number edges maximum matching yuster developed log algorithm difference maximum minimum vertex degree input graph moreover algorithms computing maximum matchings special graph classes including convex bipartite strongly chordal chordal bipartite graphs general spirit parameterization solvable problems also referred fpt fptp short forms starting point supported postdoc fellowship german academic exchange service daad durham university however focus unweighted case parameter table kernelization results running time kernel size results matching feedback edge number feedback vertex number time time results bipartite matching distance chain graphs time vertices edges vertices edges theorem theorem vertices theorem research remarkably fomin recently developed algorithm compute maximum matching graphs treewidth log randomized time following paradigm kernelization provably effective efficient data reduction provide systematic exploration power data reduction matching thus aim fitting within fptp devise problem kernels computable almost linear time particular motivation efficient kernelization algorithms possible transform multiplicative additive almost fptp algorithms furthermore kernelization algorithms typically based data reduction rules used preprocessing heuristics approximation algorithms goal getting larger matchings kernelization usually defined decision problems use remainder paper decision version matching nutshell kernelization decision problem instance algorithm produces equivalent instance whose size solely function parameter preferably polynomial focus decision problems justified fact results although formulated decision version straightforward way extend corresponding optimization version matching input undirected graph nonnegative integer question size subset nonoverlapping disjoint edges since solving given instance returning trivial always produces kernel 
polynomial time looking kernelization algorithms faster algorithms solving problem problems kernelization algorithm since running polynomial time presumably faster solution algorithm course longer true applying kernelization solvable problem like matching focus classical kernelization problems mostly improving size kernel particularly emphasize polynomially solvable problems becomes crucial also focus running time kernelization algorithm moreover parameterized complexity analysis framework also applied kernelization algorithm example kernelization algorithm running time problem specific parameter might preferable another one running time paper present kernelization algorithms matching run linear time see sections almost linear time time see section contributions paper present three efficiently computable kernels matching see table overview parameterizations categorized distance triviality motivated follows first note matchings trivially found linear time trees forests consider corresponding edge deletion distance feedback edge number vertex deletion distance feedback vertex number notably trivial algorithm computing feedback edge number approximation algorithm feedback vertex number mention passing parameter vertex cover number feedback vertex number frequently studied kernelization particular gupta peng implicitly provided kernel matching respect parameter vertex cover number coming bipartite graphs note parameterization vertex deletion distance chain graphs motivated follows first chain graphs form one obvious easy cases bipartite graphs matching solved linear time second show vertex deletion distance bipartite graph chain graph linear time moreover vertex deletion distance chain graphs vertex cover number bipartite graph overview main results given table study kernelization matching parameterized feedback vertex number vertex deletion distance forest see section first show subset data reduction rules feedback vertex set kernel also yields computable kernel typically much larger parameter feedback edge number see section bipartite matching faster algorithm known general graphs kernelize bipartite matching respect vertex deletion distance chain graphs see section seen high level two main results employ algorithmic strategy namely function parameter number neighbors appropriate vertex deletion set feedback vertex set deletion set chain graphs respectively achieve develop new irrelevant edge techniques tailored two kernelization problems specifically whenever vertex deletion set large degree efficiently detect edges incident whose removal change size maximum matching remaining graph shrunk data reduction rules approach removing irrelevant edges natural technical details proofs correctness become quite technical combinatorially challenging particular case feedback vertex number could number neighbors vertex technical side remark emphasize order achieve almost kernelization algorithm often need use suitable data structures carefully design appropriate data reduction rules exhaustively applicable linear time making form algorithm engineering much relevant classical setting mere data reduction rules notation observations use standard notation graph theory particular paths consider simple paths two paths graph called internally either completely overlap endpoints matching graph set pairwise disjoint edges let graph let matching degree vertex denoted deg vertex called matched respect edge containing otherwise called free respect matching clear context omit respect alternating path respect path 
every second edge path augmenting path alternating path whose endpoints free well known matching maximum augmenting path let two matchings denote graph containing edges symmetric difference observe every vertex degree two max matching denote maximum matching max largest possible overlap number edges maximum matching max maximum matching holds max observe maximum matching furthermore observe max consists paths isolated vertices max paths augmenting path moreover paths short possible max observation path holds every proof assume shorter path also augmenting path corresponding maximum matching max max satisfies contradiction definition observation let graph maximum matching let vertex subset size let maximum matching kernelization parameterized problem set instances finite alphabet parameter say two instances parameterized problems equivalent kernelization algorithm given instance parameterized problem computes polynomial time equivalent instance kernel computable function say measures size kernel say admits polynomial kernel often kernel achieved applying executable data reduction rules call data reduction rule correct new instance results applying equivalent instance called reduced respect data reduction rule application rule effect instance kernelization matching general graphs section investigate possibility efficient effective preprocessing matching first present section simple kernel matching respect parameter feedback edge set exploiting data reduction rules ideas used kernel present section main result section kernel smaller parameter feedback vertex number parameter feedback edge number provide computable kernel matching parameterized feedback edge number size minimum feedback edge set observe minimum feedback edge set computed linear time via simple search search kernel based next two simple data reduction rules due karp sipser deal vertices degree two reduction rule let deg delete deg delete neighbor decrease solution size one matched neighbor reduction rule let vertex degree two let neighbors remove merge decrease solution size one correctness stated karp sipser completeness give proof lemma reduction rules correct proof degree zero clearly matching remove degree one let single neighbor let maximum matching size least matched since otherwise adding edge would increase size matching thus maximum matching size least conversely maximum matching size easily extended edge maximum matching size degree two let two neighbors let maximum matching size least matched matched since otherwise adding edge resp would increase size matching thus deleting merging decreases size one looses either edge incident one edges incident hence resulting graph maximum matching size least conversely let matching size least merged vertex free matching size otherwise matched vertex matching either least one two vertices neighbor matching vertex yields matching size although reduction rules correct clear whether reduction rule exhaustively applied linear time however purpose suffices consider following restricted version exhaustively apply linear time reduction rule let vertex degree two neighbors degree two remove merge decrease one lemma reduction rules exhaustively applied time proof give algorithm exhaustively applies reduction rules linear time first using bucket sort sort vertices degree keep three lists containing vertices one applies reduction rules straightforward way neighbor vertex deleted check vertex degree zero one two yes add vertex corresponding list next show algorithm runs linear time first 
observe deletion vertex done constant time vertices affected second consider vertex neighbor observe deleting done deg time since one needs update degrees neighbors furthermore decreasing one done constant time deleted vertex finally consider vertex two neighbors degree two deleting takes constant time merge iterate neighbors add neighborhood neighbor already neighbor decrease degree one relabel new contracted vertex overall running time apply reduction rules exhaustively deg theorem matching admits computable kernel respect parameter feedback edge number proof apply reduction rules exhaustively linear time see lemma claim reduced graph less vertices edges denote feedback edge set furthermore denote vertices degree one two two thus leaf incident edge next since forest tree thus finally vertex needs least one neighbor degree least three since reduced respect reduction rule thus vertices either incident edge adjacent one vertices degree least three since sum degrees vertices follows thus number vertices since forest follows edges applying algorithm matching kernel yields corollary matching solved time feedback vertex number parameter feedback vertex number next provide matching kernel size computable time feedback vertex number using known factor algorithm approximate feedback vertex set use kernelization algorithm roughly speaking kernelization algorithm extends computable kernel respect parameter feedback edge set thus reduction rules play important role kernelization compared kernels presented paper kernel presented comes price higher running time bigger kernel size exponential size remains open whether matching parameterized feedback vertex number admits computable kernel possibly exponential size whether admits polynomial kernel computable time subsequently describe kernelization algorithm keeps kernel vertices given feedback vertex set shrinks size need notation section assume tree rooted arbitrary fixed vertex refer parent children vertex leaf called bottommost leaf either siblings siblings also leaves bottommost refers subtree root parent considered leaf outline algorithm follows assume throughout log since otherwise input instance already kernel size reduce wrt reduction rules compute maximum matching modify linear time leaves free section bound number free leaves section bound number bottommost leaves section bound degree vertex use reduction rules provide kernel size section whenever reduce graph step also show reduction correct given instance reduced one correctness kernelization algorithm follows correctness step discuss following details step steps lemma perform step linear time lemma step correct maximum matching step computed repeatedly matching free leaf neighbor removing vertices graph thus effectively applying reduction rule lemma done linear time step done time traversing tree bfs manner starting root visited inner vertex free observe children matched since maximum pick arbitrary child match vertex previously matched free since child visited future observe steps change graph auxiliary matching thus steps correct step recall goal number edges vertices since use simple analysis parameter feedback edge set furthermore recall observation size maximum matching plus size crucial observation vertex least neighbors free wrt exists maximum matching matched one vertices since blocked matching edges means delete edges incident formalizing idea obtain following reduction rule reduction rule let graph let subset size let maximum matching vertex least free neighbors delete edges 
vertices lemma reduction rule correct exhaustively applied time proof first discuss correctness running time denote size maximum matching input graph size maximum matching new graph edges incident deleted need show since matching also matching easily obtain remains max show end let maximum matching maximum overlap see free wrt matched vertex also neighbor also matching thus case hence consider remaining case matched vertex edge deleted reduction rule hence neighbors neighbors free wrt none edges deleted observe choice graph graph vertex set edges either see contains exactly paths consider isolated vertices paths paths augmenting path observation observe edge one augmenting paths denote path thus paths contain also paths contains exactly two vertices free wrt endpoints path means vertex inner vertex path furthermore since maximum matching follows path one two endpoints hence vertices contained paths except therefore one vertices say free wrt matched thus reversing augmentation along adding edge obtain another matching size observe matching thus completes proof correctness come running time exhaustively apply data reduction rule follows first initialize vertex counter zero second iterate free vertices arbitrary order free vertex iterate neighbors neighbor following counter less increase counter one mark edge initially edges unmarked third iterate vertices counter currently considered vertex delete unmarked edges incident completes algorithm clearly deletes edges incident vertex free neighbors edges neighbors kept running time iterating free vertices consider edge furthermore iterating vertices consider edge finish step exhaustively apply reduction rule linear time afterwards free wrt leaves least one neighbor since vertices adjacent free leaves thus applying reduction rule remove remaining free leaves neighbor however since vertex also neighbor removed might create new free leaves need apply reduction rule update matching see step process alternating application reduction rules stops rounds since neighborhood vertex changed reduction rule shows running time next show improve arrive final lemma subsection algorithm reduce input matching instance feedback vertex set output equivalent matching instance also feedback vertex set maximum matching leaves free reduce wrt reduction rules compute maximum matching described step foreach store number free neighbors foreach marked false stack containing free leaves empty pop foreach check whether reduction rule applicable marked true fix free neighbor enough free neighbors apply reduction rule foreach delete foreach marked false delete next deal case becomes vertex degg free push degg matched delete neighbor degg neighbors neighbor matched neighbor delete update leaf add free add list vertices check else arbitrary alternating path leaf subtree rooted augment along ensure free vertices leaves push free add list vertices check return lemma given matching instance feedback vertex set algorithm computes linear time instance feedback vertex set maximum matching following holds matching size matching size vertex free wrt leaf free leaves proof following explain algorithm reduces graph respect reduction rules updates matching described step algorithm performs lines steps described previous section done linear time next reduction rule applied lines using approach described proof lemma vertex counter maintained iterating free leaves counters updated counter reaches algorithm knows fixed free neighbors according reduction rule edges vertices deleted see line 
observe counter reaches vertex never considered algorithm since remaining neighbors free leaves already popped stack difference description proof lemma algorithm reacts degree vertex decreased one see lines matched simply remove matched neighbor otherwise add list unmatched vertices defer dealing latter stage algorithm observe matching still satisfies property free vertex leaf since matched vertex pairs deleted far deleting unmatched vertices respective neighbor maximum matching needs updated satisfy property algorithm lines let entry degree one line free leaf neighbors following reduction rule delete neighbor decrease solution size one see lines let denote previously matched neighbor since removed free leaf simply add way deal later leaf need update since leaves allowed free end take arbitrary alternating path leaf subtree root augment along see lines done follows pick arbitrary child let matched neighbor since parent follows child remove add leaf alternating path found augmented otherwise repeat procedure taking role completes algorithm correctness follows fact deletes edges vertices according reduction rules remains show running time end prove algorithm considers edge two times first consider edges incident vertex edges inspected twice algorithm marked see line second time deleted bounds running time first part lines consider remaining edges within end observe algorithm performs two actions edges deleting edges line finding augmenting along alternating path lines clearly deleting edge longer considered remains show edge part one alternating path used lines assume toward contradiction algorithm augments along edge twice edges augmented twice let one closest root tree contained edge closer root let endpoints first augmenting path containing endpoints second augmenting path containing observe augmenting path chosen line holds one endpoint leaf endpoint ancestor leaf assume without loss generality leaves respective ancestors let vertices deleted line turn made free observe contain four vertices since augmenting vertices deleted since contained paths either ancestor vice versa case happen since second augmenting path endpoint would matched contradiction see line next consider case ancestor case handled subsequently denote neighbor observe since chosen closest root next distinguish two cases whether initially matched initially free matched augmenting along choice changed augmentation along however contradiction since augmenting along happens matched edge deleted since matched time deleted means would matched two vertices thus consider case initially matched augmenting along free matched parent consequence matched neither augmentation since algorithm augments along deleted matched follows edge augmented algorithm augments along denote augmenting path containing edge since apparently free leaf follows needs contain matched neighbor means edge augmented least twice however closer root contradiction choice completes case ancestor consider remaining case ancestor case neighbor observe child furthermore observe augmentation along leaf free reached alternating path hence augmentation along holds reach exactly one free leaf via alternating path observe true even algorithm removes since new free leaf created thus deleting right augmentation along augmenting path free leaf reachable contradiction fact matching maximum conclude edge augmented thus algorithm considers edge twice augmenting deleting hence algorithm runs linear time summarizing step apply algorithm order obtain instance free vertices 
leaves lemma done linear time furthermore lemma also shows step correct step step reduce graph time bottommost leaves remain forest restrict consider leaves matched parent vertex sibling call bottommost leaves interesting sibling bottommost leaf definition also leaf thus one leaves bottommost leaf siblings matched respect leaves free recall previous step number free leaves respect hence bottommost leaves interesting general strategy step extend idea behind reduction rule want keep pair vertices different internally augmenting paths ease notation keep paths although keeping sufficient step consider augmenting paths form bottommost leaf parent assume parent adjacent vertex observe case augmenting path starting two vertices continue end neighbor thus edge used augmenting paths length three furthermore augmenting paths clearly internally need edge kept augmenting paths already delete furthermore deleted last edge neighbors beginning vertex removed applying reduction rule child leaf follows neighbors show lemma application reduction rule remove takes time remove vertices time spent reduction rule step show simple preprocessing one application reduction rule algorithm indeed performed time lemma let leaf tree parent let parent degree two applying reduction rule deleting contracting setting done time plus time initial preprocessing algorithm step input matching instance feedback vertex set size log maximum matching free vertices leaves output equivalent matching instance also feedback vertex set tree bottommost leaves maximum matching free vertices leaves fix arbitrary bijection foreach set number read constant time initialize table tab size tab list containing parents interesting bottommost leaves empty pop child vertex foreach tab tab tab else delete degree two apply reduction rule decreases one vertex resulting merge parent interesting bottommost leaf add parent return proof preprocessing simply create partial adjacency matrix vertices one dimension dimension adjacency matrix size clearly computed time apply reduction rule deleting takes constant time merge iterate neighbors neighbor already neighbor decrease degree one otherwise add neighborhood relabel new merged vertex since leaf neighbor namely deleted follows remaining neighbors thus using adjacency matrix one check constant time whether neighbor hence algorithm runs deg time ideas used algorithm use step step algorithm explained proof following lemma stating correctness running time algorithm lemma let matching instance let feedback vertex set let maximum matching free vertices leaves algorithm computes time instance feedback vertex set maximum matching following holds matching size matching size bottommost leaves free vertices leaves proof start describing basic idea algorithm end let edge interesting bottommost leaf without siblings matched parent counting pair one augmenting path gives simple analysis time per edge slow purposes instead count pair consisting vertex set one augmenting path way know one augmenting path without iterating comes price considering pairs however show computations time per considered edge main reason improved running time simple preprocessing allows bottommost vertex determine constant time preprocessing follows see lines first fix arbitrary bijection set subsets numbers done example representing set binary string number ith position given set number computed time straightforward way thus lines performed time furthermore since assume log otherwise input instance already exponential kernel thus reading 
comparing numbers done constant time furthermore line algorithm precomputes vertex number corresponding neighborhood preprocessing algorithm uses table tab counts augmenting path vertex set whenever bottommost leaf exactly neighborhood parent adjacent see lines time algorithm proceeds follows first computes line set contains parents interesting bottommost leaves clearly done linear time next algorithm processes vertices observe vertices might added see line processing let currently processed vertex let child vertex let neighborhood neighbor algorithm checks whether already augmenting paths table lookup tab see line table entry incremented one see line since provide another augmenting path yes edge deleted line show change maximum matching size degree two processing neighbors applying reduction rule remove contract two neighbors follows lemma application reduction rule done time hence algorithm runs time recall vertices free wrt leaves thus changes applying reduction rule line follows first edge removed second edge replaced hence matching running algorithm still free vertices leaves remains prove deletion edge line results equivalent instance resulting instance bottommost leaves first show end assume towards contradiction new graph smaller maximum matching clearly larger maximum matching thus maximum matching contain edge implies child matched one neighbors except free wrt deleting adding yields another maximum matching containing contradiction recall since leaf thus maximum matching contains edge observe algorithm deletes least interesting bottommost leaves respective parent adjacent see lines since follows pigeon hole principle least one vertices say matched vertex thus since interesting bottommost leaf matched remaining neighbor parent implies another maximum matching contradiction assumption maximum matchings contain next show resulting instance bottommost leaves end recall bottommost leaves interesting see discussion beginning subsection hence remains number interesting bottommost leaves observe parent interesting bottommost leaf adjacent vertex since otherwise would deleted line furthermore running algorithm vertex adjacent parents interesting bottommost leaves see lines thus number interesting bottommost leaves therefore number bottommost leaves step subsection provide final step kernelization algorithm recall previous steps number bottommost leaves computed maximum matching vertices free wrt free vertices leaves using next show reduce graph size end need notation leaf bottommost called pendant define tree forest tree forest obtained removing pendants next observation shows much larger allows restrict following giving upper bound size observation let described vertex set let tree forest vertex set proof observe union pendants thus suffices show contains pendants end recall maximum matching free leaves thus leaves sibling also leaf since two leaves parent one matched hence pendants pairwise different parent vertices since parent vertices follows number pendants use following observation provide upper bound number leaves observation let forest let forest let set bottommost leaves set leaves exactly proof first observe bottommost leaf leaf since remove vertices obtain thus remains show leaf bottommost leaf distinguish two cases whether leaf first assume leaf thus child vertices removed since remove pendants obtain since pendant leaf follows parent one leaves thus definition leaves bottommost leaves contradiction fact deleted creating second assume leaf bottommost leaf done thus assume 
bottommost leaf therefore pendant however since remove pendants obtain follows contained contradiction observation follows set bottommost leaves exactly set leaves previous step reduced graph thus vertices degree one since tree forest also vertices degree least three let vertices degree two figure situation proof lemma augmenting path intersects two augmenting paths pwx pwy respectively bold edges indicate edges matching dashed edges indicate alternating paths starting first last edge matching gray paths background highlight different augmenting paths initial paths well new paths postulated lemma let remaining vertices follows hence remains bound size end degree vertex use reduction rules check edge whether need check use idea previous subsection vertex needs reach subset times via augmenting path similarly previous section want keep enough augmenting paths however time augmenting paths might long different augmenting paths might overlap still use basic approach use following lemma stating still somehow replace augmenting paths lemma let maximum matching forest let puv augmenting path let pwx pwy pwz three internally augmenting paths respectively puv intersects exist two augmenting paths endpoints one three vertices proof label vertices puv alternating odd even respect puv two consecutive vertices label odd even analogously label vertices pwx pwy pwz odd even respect pwx pwy pwz respectively always odd since paths augmenting follows edge even vertex succeeding odd vertex matching edge odd vertex succeeding even vertex matching observe puv intersects paths least two consecutive vertices since every second edge must edge since forest vertices free respect follows intersection two augmenting paths connected thus path since puv intersects three augmenting paths follows least two paths say pwx pwy fitting parity intersections puv pwx pwy even vertices respect puv either even odd respect pwx pwy assume without loss generality intersections paths vertices label respect three paths labels differ revert ordering vertices puv exchange names change labels puv opposite denote first last vertex intersection puv pwx analogously denote first last vertex intersection puv pwy assume without loss generality puv intersects first pwx pwx observe even vertices odd vertices since intersections start end edges see fig illustration arbitrary path two arbitrary vertices denote subpath observe puv pwx pwy puv two augmenting algorithm algorithm computing step kernel wrt parameter feedback vertex number input matching instance feedback vertex set size log bottommost leaves maximum matching free vertices leaves output equivalent matching instance contains vertices edges fix arbitrary bijection foreach set number read constant time initialize table tab size tab tree forest vertices degree foreach foreach false needed augmenting path delete exhaustively apply reduction rules return function free wrt return true matched neighbor adjacent free leaf return true least one neighbor tab tab tab return true foreach neighbor matched wrt fulfills true return true return false paths algorithm description provide algorithm step see algorithm pseudocode algorithm uses preprocessing see lines algorithm thus algorithm determine whether two vertices neighborhood constant time algorithm algorithm uses table tab entry vertex set table filled way algorithm detected least tab internally augmenting paths main part algorithm boolean function lines makes decision whether delete edge function works follows edge starting graph explored along 
possible augmenting paths reason keeping edge found exploration possible vertex free wrt augmenting path keep see line observe step number free vertices vertices leaves thus keep bounded number edges incident corresponding augmenting paths end free leaf provide exact bound discussing size graph returned algorithm line algorithm stops exploring graph keeps edge degree least three reason keep graph exploration simple following paths ensures running time exploring graph exceed since number vertices degree least three bounded see discussion observation follows bounded number edges kept free wrt matched vertex adjacent leaf free wrt path augmenting path thus algorithm keeps case edge see line since number free leaves bounded bounded number edges incident kept degree least three algorithm stops graph exploration keeps edge see line keep running time overall let denote neighborhood thus partial augmenting path extended vertex thus algorithm yet find paths vertices whose neighborhood also table entry tab encodes set increased one edge kept see lines need paths since paths might long intersect many augmenting paths see proof lemma details enough algorithm already found augmenting paths neighborhood irrelevant algorithm continues line discussed cases keep edge apply algorithm extends partial augmenting part considering neighbors except since algorithm dealt possible extensions vertices lines extensions free vertices line follows next vertex path vertex matched wrt furthermore since want extend partial augmenting path require adjacent otherwise would another shorter partial augmenting path need currently stored partial augmenting path statements algorithm edge denote induced subgraph vertices explored function keepedge called line precisely initialize whenever algorithm reaches line add furthermore whenever algorithm reaches line add next show path path one additional pendant lemma let two vertices either path tree exactly one vertex two neighbors furthermore degree exactly three neighbor proof first show vertices except neighbor degree two observe vertices requires algorithm reach line let currently last vertex algorithm continues graph exploration line observe algorithm therefore dealt case degree least three line thus either pendant leaf degree two first case candidate continue graph exploration stops second case degree two next show candidate continuing graph exploration line leaf assume toward contradiction leaf since parent matched vertex chosen see line follows matched implies function would returned true line would reached line contradiction thus graph exploration follows vertices furthermore argumentation implies adjacent leaf unless leaf predecessor graph exploration two cases either adjacent leaf leaf matched neighbor first case one neighbor since hence degree two second case two neighbors thus degree three set union induced subgraphs wrt lemma exists partition pxa pxb graphs within pxa within pxb pairwise disjoint proof since tree forest also bipartite let two color classes define two parts pxa pxb follows subgraph pxa neighbor contained otherwise pxb show subgraphs pxa pxb pairwise end assume toward contradiction two graphs pxa share vertex case pxb completely analogous let first vertex respectively adjacent observe let first vertex lemma paths trees one vertex degree two vertex degree three neighbor respectively implies together either assume without loss generality since vertex follows algorithm followed graph exploration line however contradiction since algorithm checks line whether 
new vertex path adjacent thus subgraphs pxa pxb pairwise next show tab recall maps number see line exist least internally augmenting paths lemma line algorithm holds tab exist wrt least alternating paths vertices paths pairwise except proof note time tab increased one see line algorithm found vertex alternating path furthermore since function returns true case edge neighbor deleted line thus exist least alternating paths vertices whose neighborhood exactly lemma follows least half paths next lemma shows algorithm correct runs time lemma let matching instance let feedback vertex set size log bottommost leaves let maximum matching free vertices leaves algorithm computes time equivalent instance size proof split proof three claims one correctness algorithm one returned kernel size one running time claim input instance instance produced algorithm proof observe algorithm changes input graph two lines lines lemma applying reduction rules yields equivalent instance thus remains show deleting edges line correct change size maximum matching end observe deleting edges increase size maximum matching thus need show size maximum matching decrease assume toward contradiction let edge whose deletion decreased maximum matching size redefine graph deletion graph deletion recall algorithm gets additional input maximum matching let max maximum matching largest possible overlap let since free wrt follows path one endpoint recall since path follows augmenting path since vertices free wrt follows vertices except endpoints let second endpoint path call vertex even odd vertex even odd distance even vertex odd vertices observe odd vertex adjacent since otherwise would another augmenting path uses vertices implying existence another maximum matching use contradiction let neighbor since odd vertex except adjacent follows graph exploration function starting line either reached returned true cases function would returned true line algorithm would deleted contradiction thus assume therefore function considered vertex line keep edge thus considering holds tab encodes lemma follows pairwise except alternating paths vertices thus internally vertexdisjoint paths one paths intersect path reverting augmentation along augmenting along would results another maximum matching containing contradiction thus assume path intersects least one path two paths intersected path holds path intersect one assume toward contradiction since path except contains follows intersections paths within since internally follows cycle contradiction fact feedback vertex set since follows pigeon hole principle path intersects least three paths path intersects apply lemma obtain two augmenting paths thus reverting augmentation along augment along yields another maximum matching contain contradiction claim graph returned algorithm vertices edges proof first show vertex degree end need count number neighbors function returns true line lemma function explores graph along one two paths essentially growing one starting point two directions recall denotes subgraphs induced graph exploration neighbors lemma partition pxa pxb within part subgraphs pairwise consider two parts independently start bounding number graphs pxa function returned true analysis completely analog pxb since explored subgraphs disjoint free vertices wrt leaves follows algorithm returned times true line due adjacent free leaf also algorithm returns times true line due free furthmore algorithm returns times true line finally show algorithm returns times true lines respectively follows 
discussion observation tree leaves denoted vertices degree least three denoted let vertices since tree forest vertices edges hence degt implies degt thus algorithm returns times true line due vertex also algorithm returns times true line due vertex summarizing considering graph explorations pxa algorithm returned times true function analogously considering graph explorations pxa algorithm also returned times true hence vertex degree show exhaustive application reduction rules indeed results kernel claimed size end denote vertices degree one two least three since vertex degree reduced wrt reduction rule next since forest tree finally thus vertex needs least one neighbor degree least three since reduced respect reduction rule thus vertex either incident vertex adjacent one vertices degree least three thus summarizing contains vertices edges claim algorithm runs time proof first observe lines done time preprocessing table initialization done time discussed section furthermore clearly computed time second lemma applying reduction rules done time thus remains show iteration line done time lemma explored graphs partitioned two parts within part subgraphs thus vertex visited twice execution function furthermore observe lines table accessed constant time thus function checks whether vertex neighbor namely line single check done constant time since rest computation done less edges follows iteration line indeed done time completes proof lemma kernelization algorithm parameter feedback vertex number essentially calls steps theorem matching parameterized feedback vertex number admits kernel size computed time proof first using approximation compute approximate feedback vertex set apply steps applying first three steps rather straightforward see section remaining three steps use algorithms lemmas done time results kernel size figure chain graph note ordering vertices going left right ordering vertices going right left reason two orderings drawn different directions maximum matching drawn parallel edges see bold edges fact algorithm computes matchings matched edges parallel applying algorithm matching kernel yields corollary matching solved time feedback vertex number kernelization matching bipartite graphs section investigate possibility efficient effective preprocessing bipartite matching particular show computable kernel respect parameter distance chain graphs first part section provide definition chain graphs describe compute parameter second part discuss kernelization algorithm definition computation parameter first define chain graphs subclass bipartite graphs special monotonicity properties definition let bipartite graph chain graph two color classes admits linear order neighborhood inclusion whenever observe graph contains twins one linear order neighborhood inclusion avoid ambiguities fix vertices color class resp chain graph one linear order resp two vertices resp resp remainder section consider bipartite representation given chain graph vertices resp ordered according resp left right resp right left illustrated figure simplicity notation use following denote orderings whenever color class clear context next show approximate parameter corresponding vertex subset linear time end use following characterization chain graphs lemma bipartite graph chain graph contain induced lemma approximation problem deleting minimum number vertices bipartite graph order obtain chain graph proof let bipartite graph compute set chain graph four times larger minimum size set algorithm iteratively tries find deletes 
four corresponding vertices algorithm algorithm computes maximum matching chain graph edges parallel see fig visualization input chain graph output maximum matching matched edges parallel compute size maximum matching using algorithm steiner yeomans return found since lemma least one vertex needs removed algorithm yields claimed approximation details algorithm follows first initializes sorts vertices degree vertices increasing order vertices decreasing order deg deg deg deg since degree vertex max done linear time bucket sort stage algorithm deletes vertices degree zero vertices adjacent vertices partition deleted vertices added since vertices participate next algorithm recursively processes vertices nondecreasing order degrees let vertex let neighbor since adjacent vertices otherwise would deleted vertex adjacent since deg deg follows neighbor adjacent hence four vertices induce two edges thus form thus algorithm adds four vertices deletes graph continues vertex minimum degree running time show initial sorting algorithm considers edge twice selecting described done time select algorithm simply iterates vertices finds vertex adjacent way deg vertices considered similarly iterating neighbors one finds hence edges incident used find vertices second time vertices deleted thus using appropriate data structures algorithm runs time kernelization rest section provide computable kernel bipartite matching respect parameter vertex deletion distance chain graphs intuitive description kernelization follows first upper bound number neighbors vertex deletion set mark special vertices use monotonicity properties chain graphs upper bound number vertices lie two consecutive marked edges thus bounding total size reduced graph vertices let bipartite input graph let vertex subset chain graph lemma compute approximate linear time kernelization algorithm follows first compute specific maximum matching algorithm edges parallel matched vertices consecutive ordering see also fig since convex graphs matching solvable convex graphs super class chain graphs done time use kernelization algorithm obtain local information possible augmenting paths example augmenting path least one endpoint forming data reduction rule denoting size maximum matching yields following reduction rule return trivial return trivial correctness reduction rule follows observation without loss generality able assume vertex either matched another vertex vertex means augmenting path starting vertex enter chain graph vertex formalize concept vertex define nsmall set neighbors smallest degree formally nsmall lemma let bipartite graph let vertex set chain graph exists maximum matching every matched vertex matched vertex nsmall proof assume towards contradiction matching let maximum matching maximizes number vertices matched vertex nsmall let maximize nsmall let vertex matched vertex nsmall matched vertex nsmall unmatched vertex nsmall matching maximum matching vertices compared matched vertex nsmall contradiction hence assume free vertex nsmall since follows least one vertex nsmall matched vertex observe definition nsmall thus thus maximum matching vertices compared fulfilling condition lemma contradiction based lemma provide next rule reduction rule let instance reduced respect reduction rule let delete edges nsmall clearly reduction rule exhaustively applied time one iteration order idea kernel follows keep kernel vertices vertex keep vertices nsmall kept vertex matched keep also vertex matched denote set vertices kept far consider augmenting path 
vertex vertex observe also thus augmenting path furthermore vertices augmenting path subset thus keeping vertices edges also keep augmenting path kernel hence remains consider complicated case end next show certain areas chain graph number augmenting paths passing area bounded definition let chain graph let matching furthermore let lmv resp rmv number neighbors resp left resp right formally lmv rmv definition terms left right refer ordering vertices bipartite representation illustrated figure abbreviation rmv lmv stands number vertices right left matched vertex set lmv lmv rmv rmv finally define rmv rmv lmv lmv lemma let chain graph maximum matching computed algorithm let number alternating paths start end edges endpoints left right min lmv rmv proof prove case lmv rmv min lmv rmv lmv case lmv rmv follows symmetry switched roles let aug denote number alternating paths first last edge furthermore let lmv neighbors left lmv since chain graph follows vertex adjacent vertex furthermore edge follows construction see algorithm hence alternating paths contain least one vertex lmv since alternating paths follows aug lmv previous lemma directly obtain following lemma let chain graph let maximum matching computed algorithm let lmv alternating paths start end edges endpoints left right lemma states number augmenting paths passing area bounded using want replace area gadget vertices end need notation kept vertex may also keep vertices right left call vertices left buffer right buffer definition let chain graph let maximum matching computed algorithm let lmv vertices right form right buffer formally min lmv analogously min lmv min lmv min lmv note definition sets depends four vertices omit dependencies names sake presentation reduction rule let instance reduced respect reduction rule let size least min lmv delete vertices matched neighbors add edges vertices right buffer vertices left buffer decrease number removed matched vertex pairs lemma reduction rule correct exhaustively applied time proof first introduce notation provide general observations let stated reduction rule denote resp set vertices resp denote sets deleted tices note since produced algorithm denote vertices buffers ymin lmv since input instance reduced respect reduction rule follows denote matching obtained deleting edges reduced graph recall reduced number matched edges removed next show claims input instance produced instance present two claims observe perfect matching vertices thus claim max max proof recall minimizes size max since holds brevity set max max note graph contains paths show many augmenting paths paths show contains matching size max end observe paths use vertices also contained thus consider paths use vertices denote set paths using vertices set consider arbitrary let pim denote vpi vertices pim corresponding order observe vpi endpoints pim exactly one endpoint endpoint since pim path assume without loss generality vpi thus vertices pim odd even index next show two vertices vji pim follows vji first observe vji free vertex wrt since matched wrt since computed algorithm follows thus assume thus assume toward contradiction vji since since chain graph follows contradiction observation thus vji next show path pim contains least one vertex vji left least one vertex right recall computed algorithm thus free vertices smallest wrt ordering see also fig thus one endpoint pim vertex either left vpi right thus assume endpoints pim showed previous paragraph vpi thus also vpi vpi since computed algorithm since assumed vertices pim 
follows least one vertex holds furthermore since assumption vertex follows since vpi since vpi denote api last vertex path pim right api vertex pim vertex pim holds api follows previous paragraph api exists analogously api denote bpi first vertex path pim left bpi vertex pim vertex pim holds bpi means alternating path api bpi starting ending edges paths pairwise show also pairwise alternating paths api bpi assume without loss generality apt since path pim successor api right follows api least neighbors right since right buffer contains lmv see lemma vertices right api bri symmetry bpi recall forms perfect matching well since reduction rule added edges follows path pim completed follows api bri ari bpi note exactly edges bri ari thus path replaced augmenting path augmenting paths thus many augmenting paths paths therefore claim proof let maximum matching observe construct matching follows first copy edges second add edges perfect matching added observe edges also matching size thus assume edges arit lmv observe clearly vertices arit brjt free respect show pairwise augmenting paths arit note however paths necessarily arir end recall definition lmv vertex least lmv neighbors left matched neighbor allows iteratively find augmenting paths follows create augmenting path start vertex denote last vertex added beginning add neighbor matched following adjacent vertex arit add otherwise add leftmost neighbor repeat process contains vertex arit found remove vertices continue observe two vertices least lmv vertices ordering vertices see figure thus finite number steps reach vertex arit furthermore follows removing vertices holds lmv decreased exactly one contains vertex one vertex among lmv neighbors directly left matched neighbor thus iteration lmv follows procedure constructs augmenting paths arit hence contains matching size thus correctness data reduction rule follows previous two claims remains prove running time end observe matching given computing degrees done time also lmv computed linear time vertex one check neighbor whether left matched neighbor adjust lmv accordingly furthermore lmv computing removing vertices done deg deg time thus reduction rule exhaustively applied time next number free vertices respect let afree free respect akfree afree afree akfree contains rightmost free vertices observe vertices akfree left analogously denote bfree set containing leftmost free vertices reduction rule reduction rule let instance reduced respect delete vertices afree afree bfree bfree lemma reduction rule correct applied time proof running time clear remains show correctness let input instance reduced respect reduction rule let instance produced reduction rule show deleting vertices afree akfree yields equivalent instance follows symmetry deleting vertices bfree bfree yields also equivalent instance first show also produced instance yesinstance let maximum matching clearly observe removed vertex afree holds every vertex akfree right thus since reduced respect reduction rule follows thus exist augmenting paths none augmenting paths ends vertex afree akfree augmenting paths exist also thus one augmenting paths say ends least one vertex akfree endpoint augmenting paths since follows lemma neighbor indeed since follows thus replace augmenting path exhaustively applying exchange argument follows assume none augmenting paths uses vertex afree akfree thus augmenting paths also contained hence resulting instance still finally observe also since subgraph thus matching size also matching hence theorem matching 
bipartite graphs admits kernel respect vertex deletion distance chain graphs kernel computed linear time proof let input instance two partitions chain graph given explicitly use approximation provided lemma compute kernelization follows first compute linear time algorithm next compute set apply reduction rules lemmas done linear time let leftmost vertex rightmost vertex let matched neighbors since reduced instance respect reduction rule follows number vertices well number vertices respectively furthermore free vertices left since reduced instance respect reduction rule remains number matched vertices left right observe vertices left matched respect vertices left following add four vertices idea edge vertex left means add vertices simulate situation leftmost vertex also ensure matched vertices add make respectively sole neighbors way ensure maximum matching new graph exactly two edges larger maximum matching old graph new graph apply reduction rule reduce number vertices formally add following edges add add edges vertices let rightmost vertex akfree add edges set set vertex afree set matched vertex furthermore add add finally increase two next apply reduction rule linear time remove reduce two procedure follows vertices left vertices right rightmost vertex use procedure thus total number vertices remaining graph furthermore observe adding removing four vertices well applying reduction rule done linear time thus overall running time kernelization applying algorithm bipartite matching kernel yields corollary matching solved time vertex deletion distance chain graphs conclusion focussed kernelization results matching remain numerous challenges future research discussed end concluding section first however let discuss closely connected issue fptp algorithms matching generic augmenting pathbased approach provide fptp algorithms matching begin note one find augmenting path linear time solving algorithm matching parameterized vertex deletion distance works follows use approximation algorithm compute vertex set trivial graph matching solvable compute linear time initial maximum matching start matching increase size times obtain time maximum matching directly derive matching solved time one following parameters feedback vertex number feedback edge number vertex cover number bipartite matching solved time vertex deletion distance chain graphs using kernelization results multiplicative dependence running time parameter made additive one instance way running time bipartite matching parameterized vertex deletion distance chain graphs improves conclude listing questions tasks future research size running time kernel respect feedback vertex set see section improved linear time computable kernel matching parameterized treewidth assuming given would complement recent randomized log time algorithm one extend kernel section bipartite matching matching parameterized distance chain graphs three concrete questions numerous including parameterizations form kernel lower bound results finally matching become drosophila fptp studies akin vertex cover classical fpt studies hope gave reasons believing references geiger naor roth approximation algorithms feedback vertex set problem applications constraint satisfaction bayesian inference siam journal computing blum new approach maximum matching general graphs proceedings international colloquium automata languages programming icalp volume lncs pages springer bodlaender jansen kratsch preprocessing treewidth combinatorial analysis kernelization siam journal discrete 
mathematics bodlaender jansen kratsch kernelization lower bounds crosscomposition siam journal discrete mathematics spinrad graph classes survey volume siam monographs discrete mathematics applications siam chang algorithms maximum matching minimum chordal bipartite graphs proceedings international symposium algorithms computation isaac volume lncs pages springer dahlhaus karpinski matching multidimensional matching chordal strongly chordal graphs discrete applied mathematics duan pettie approximation maximum weight matching journal acm fomin lokshtanov pilipczuk saurabh wrochna fully polynomialtime parameterized computations graphs matrices low treewidth proceedings annual symposium discrete algorithms soda pages siam gabow tarjan algorithm special case disjoint set union journal computer system sciences gabow tarjan faster scaling algorithms general problems journal acm giannopoulou mertzios niedermeier polynomial algorithms case study longest path interval graphs proceedings international symposium parameterized exact computation ipec volume lipics pages schloss dagstuhl fuer informatik guo niedermeier structural view parameterizing problems distance triviality proceedings international workshop parameterized exact computation iwpec volume lncs pages springer gupta peng fully dynamic matchings proceedings annual ieee symposium foundations computer science focs pages ieee computer society hopcroft karp algorithm maximum matchings bipartite graphs siam journal computing karp sipser maximum matchings sparse random graphs proceedings annual ieee symposium foundations computer science focs pages ieee computer society micali vazirani algorithm finding maximum matching general graphs proceedings annual ieee symposium foundations computer science focs pages ieee skiena algorithm design manual springer steiner yeomans linear time algorithm maximum matchings convex bipartite graphs comput math yuster maximum matching regular almost regular graphs algorithmica
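For concreteness, the primitive that both the kernelization above and the generic FPT algorithm rely on, namely finding a single augmenting path in linear time and flipping it, can be sketched as follows. This is a minimal Python sketch of the textbook bipartite routine (Kuhn-style DFS), not the paper's exact procedure; general graphs require the blossom-based algorithms of Blum or Micali and Vazirani cited above, and all identifiers here are illustrative.

```python
def find_augmenting_path(adj_left, match_left, match_right, u, seen):
    """DFS from the free left vertex u for an augmenting path; if one is
    found, flip it in place and return True. match_* use -1 for 'free'."""
    for v in adj_left[u]:
        if v in seen:
            continue
        seen.add(v)
        # v is free, or its current partner can be re-matched elsewhere
        if match_right[v] == -1 or find_augmenting_path(
                adj_left, match_left, match_right, match_right[v], seen):
            match_left[u], match_right[v] = v, u
            return True
    return False

def augment_k_times(adj_left, n_right, match_left, match_right, k):
    """Grow a given matching by up to k augmentations, one linear-time
    search per augmentation, mirroring the 'augment k times' argument."""
    grown = 0
    for u in range(len(adj_left)):
        if grown == k:
            break
        if match_left[u] == -1 and find_augmenting_path(
                adj_left, match_left, match_right, u, set()):
            grown += 1
    return grown
```

Each call to find_augmenting_path costs O(|V| + |E|), so growing a given matching by k edges costs O(k(|V| + |E|)), which is the accounting behind the "compute an initial maximum matching on the trivial part, then augment k times" approach described in the conclusion above.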
provable quantum state tomography via methods anastasios amir dohuyng srinadh constantine sujay nov ibm watson research center university maryland facebook toyota technological institute chicago university texas austin nowadays steadily growing quantum processors required develop new quantum tomography tools tailored systems work describe computational tool based recent ideas optimization algorithm excels setting data points measured lowrank quantum state system show algorithm practically used quantum tomography problems beyond reach convex solvers moreover faster approaches crucially prove despite program mild conditions algorithm guaranteed converge global minimum problem thus constitutes provable quantum state tomography protocol introduction like processor behavior quantum information processor must characterized verified certified quantum state tomography qst one main tools purpose yet generally inefficient procedure since number parameters specify quantum states grows exponentially number inefficiency two practical manifestations without prior information vast number data points needs collected data gathered numerical procedure executed dimensional space order infer quantum state consistent observations thus perform qst nowadays steadily growing quantum processors must introduce novel efficient techniques completion since often aim quantum information processing coherently manipulate pure quantum states states equivalently described positive psd density matrices use prior information modus operandi towards making qst manageable respect amount data required compressed sensing extension guaranteed approximation applied qst within context particular proven convex programming guarantees robust estimation pure author correspondence addressed amirk dhpark srinadh constantine sanghavi states much less information common wisdom dictates overwhelming probability advances however leave open question efficiently one estimate exponentially largesized quantum states limited set observations since convex programming susceptible provable performance typical qst protocols rely convex programs nevertheless achilles heel remains high computational storage complexity particular due psd nature density matrices key step repetitive application hermitian eigenproblem solvers solvers include family lanczos methods svd type methods well preconditioned hybrid schemes among others see also recent article complete overview since least per full eigenvalue decomposition required convex programs eigensolvers contribute computational complexity number qubits quantum processor obvious recurrent application eigensolvers makes convex programs impractical even quantum systems relatively small number qubits ergo improve efficiency qst need complement numerical algorithms efficiently handle large search spaces using limited amount data rigorous performance guarantees purpose work inspired recent advances finding global minimum problems propose application alternating gradient descent qst operates directly assumed structure density matrix algorithm projected factored gradient decent projfgd described based recently analyzed method psd matrix factorization problems added twist inclusion constraints optimization program makes applicable tasks qst general finding global minimum problems hard problem however approach assumes certain regularity conditions however satisfied common protocols practice good initialization make explicit text lead fast provable estimation state system even limited amount data numerical experiments 
show scheme outperforms practice approaches qst apart qst application aim broaden results efficient recovery within set constrained matrix problems developments maintain connection analogous results convex optimization standard assumptions made however work goes beyond convexity attempt justify recent findings methods show significant acceleration compared convex analogs denote frobenius norm satisfied rank accurate estimation obtained solving essentially convex optimization problem constrained set quantum states consistent measured data two convex program examples minimize setting assume data given form expectation values pauli observables pauli observable given observables total general one needs expectation values pauli observables uniquely reconstruct performing many experiment taking expectation results counts qubit registers since density matrix apply result pauli measurements guarantees robust estimation high probability randomly chosen pauli observables expectation key property achieve restricted isometry property definition restricted isometry property rip pauli measurements let linear map high probability choice pauli observables absolute constant satisfies constant minimize subject quantum state tomography setup begin describing problem qst focusing qst state pauli measurements particular let measurement vector elements measurement error denotes unknown densitynmatrix associated pure quantum state randomly chosen pauli observable normalization chosen nto follow results brevity denote linear sensing map subject captures positive assumption vector euclidean parameter related error level model key programs combination psd trace constraints combined constitute tightest convex relaxation psd structure unknown see also discussed introduction problem convex programs inefficiency applied systems practical solvers iterative handling psd constraints adds immense complexity overhead per iteration especially large see also section work propose use programming qst density matrices leads higher efficiency typical convex programs achieve restricting optimization intrinsic structure psd matrices allow describe psd matrix space opposed ambient space even substantially program theoretical guarantees global convergence similar guarantees convex programming maintaining faster preformace latter properties make scheme ideal complement methodology qst practice iii projected factored gradient decent algorithm optimization criterion recast basis projected factored gradient decent projfgd algorithm transforms convex programs imposing factorization psd matrix factorization popularized burer monteiro solving convex programming instances naturally encodes psd constraint removing expensive projection step concreteness focus convex program order encode trace constraint projfgd enforces additional constraints particular requirement translated convex constraint frobenius norm recast program program minimize subject observe constraint set convex objective longer convex due bilinear transformation parameter space criteria studied recently machine learning signal processing applications added twist inclusion matrix norm constraints makes proper tasks qst show appendices addition complicates algorithmic analysis prior knowledge rank imposed program setting real experiments state system could full rank often dominant eigenvalues case matrix rank much smaller similar methodology therefore projfgd protocol set form contains much less variables maintain optimize psd matrix thus easier update store iterates important 
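For reference, the two convex programs paraphrased above, whose displayed formulas were lost, take the following standard form in compressed-sensing QST. This is a hedged reconstruction consistent with the surrounding text, with y the measurement vector, A the Pauli sensing map, and epsilon the parameter related to the error level:

```latex
\min_{X}\ \|\mathcal{A}(X)-y\|_2 \quad \text{s.t.}\quad X \succeq 0,\ \operatorname{tr}(X)\le 1,
\qquad\text{and}\qquad
\min_{X}\ \operatorname{tr}(X) \quad \text{s.t.}\quad \|\mathcal{A}(X)-y\|_2 \le \varepsilon,\ X \succeq 0 .
```

In both programs, the combination of the PSD and trace constraints plays the role the text assigns to it: the tightest convex relaxation of the low-rank PSD structure of the unknown state.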
issue optimizing factored space existence possible factorizations given see unitary matrix since interested taining solution original space need notion distance solution factors use following distance metric definition let matrices define dist min rkf set unitary matrices projfgd algorithm heart projfgd projected gradient descent algorithm variable pseudocode provided algorithm algorithm projfgd pseudocode input function target rank iterations output initialize randomly set set set step size end first properties objective denote due symmetry gradient respect variable given adjoint operator pauli measurements case consider paper adjoint operator input vector let denote projection matrix onto set particular case already initialization compute denotes projection onto set psd matrices trace bound discuss later text complete step practice main iteration projfgd line algorithm applies simple update rule factors observe input argument performs gradient descent variable step size constants absorbed step size selection clarity two vital components algorithm initialization step step size selection initialization due bilinear structure first glance clear whether factorization introduces spurious local minima local minima exist created substitution necessitates careful initialization order obtain global minimum describing initialization procedure find helpful first discuss initialization procedure altered version trace constraints excluded case transforms minimize setting following theory stems theorem suppose unknown density matrix factorization noiseless model observations satisfy assuming linear map satisfies restricted isometry property definition constant critical point satisfying optimality conditions global minimum corollary suppose unknown full rank density matrix let denote best approximation sense let factorization noiseless model observations satisfy assuming linear map satisfies restricted isometry property definition constant critical point satisfying optimality conditions satisfy dist plain words noiseless model high probability depends random structure sensing map theorem states nonconvex change variables introduce spurious local minima case random initialization sufficient algorithm find global minimum assuming proper step size selection solutions close much close function spectrum best approximation residual corollary cases step algorithm boils random initialization cases hold corresponds povm explicit trace constraint redundant general case trace constraint present different approach followed case initial point set denotes projection onto set psd matrices satisfy represents approximation matrices also means restricted gradient lipschitz continuous parameter defer reader appendix information practice set place algorithm calculation required projection given given described following criterion minimize subject solve problem first compute eigendecomposition unitary matrix containing eigenvectors input matrix due fact frobenius norm invariant unitary transformations proves diagonal matrix computed via minimize subject last part easily solved using projection onto unit simplex alternatively practice could use standard projection set psd matrices experiments show sufficient implemented eigenvalue solver case algorithm generates initial matrix truncating computed eigendecomposition followed projection onto convex set defined set constraints program case note projection operation simple scaling apart procedure mentioned could also use specialized spectral methods initialization alternatively 
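The projection just described (eigendecompose, project the spectrum, reassemble) is easy to make concrete: since the Frobenius norm is unitarily invariant, projecting onto the set of density matrices reduces to projecting the eigenvalues onto the unit simplex. Below is a minimal Python sketch, assuming the standard sort-based simplex projection in the spirit of Michelot and of Duchi et al. referenced above; the function names are ours:

```python
import numpy as np

def project_simplex(w, z=1.0):
    """Euclidean projection of a real vector w onto {x : x >= 0, sum(x) = z}."""
    u = np.sort(w)[::-1]                        # sort descending
    css = np.cumsum(u)
    ks = np.arange(1, len(w) + 1)
    rho = ks[u - (css - z) / ks > 0][-1]        # largest feasible support size
    theta = (css[rho - 1] - z) / rho
    return np.maximum(w - theta, 0.0)

def project_density(M):
    """Project a Hermitian matrix M onto {rho : rho >= 0, tr(rho) = 1}.
    For the tr <= 1 variant, clip negative eigenvalues first and run the
    simplex step only if the clipped eigenvalues sum to more than 1."""
    M = (M + M.conj().T) / 2                    # guard against round-off
    evals, evecs = np.linalg.eigh(M)
    lam = project_simplex(evals)
    return (evecs * lam) @ evecs.conj().T       # V diag(lam) V^H
```

One such projection is needed only at initialization; inside the main loop the trace constraint on the factor reduces to a simple rescaling, which is one of the payoffs of working in the factored space.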
run convex algorithms iterations however choice often leads excessive number full truncated eigenvalue decompositions constitutes nonpractical approach discussion regarding step size type guarantees obtain discussed next step size selection theoretical guarantees focusing provide theoretical guarantees projfgd theory dictates specific constant step size selection guarantees convergence global minimum assuming satisfactory initial point provided let first describe local convergence rate guarantees projfgd theorem local convergence rate qst let quantum state density matrix system factorization let measurement vector random pauli observables corresponding sensing map let step projfgd satisfy denotes leading singular value initial point dist rip constant let estimate projfgd iteration new estimate satisfies dist dist satisfies dist theorem provides local convergence guarantee given initialization point close enough optimal solution particular dist algorithm converges locally linear rate particular der dist projfgd requires log number iterations conjecture translatespinto linear convergence infidelity metric complexity projfgd dominated application linear map multiplications note eigenvalue decomposition matrix multiplication known complexity notation latter least magnitude faster former dense matrices proof theorem provided appendix believe result stated generality complements recent results machine learning optimization communities different assumptions made constraints accommodated far assumed provided dist next theorem shows initialization could achieve guarantee assumptions turn local convergence guarantees convergence global minimum lemma let consider problem satisfies rip property constant assume optimum point satisfies rank computed satisfies srank dist srank initialization introduces restrictions condition number condition number objective function portional particular initialization assumptions theorem satisfied lemma satisfies rip constant fulfilling following expression conditions hard check priori experiments showed initialization well random initialization work well practice behavior observed repeatedly experiments conducted thus method returns exact solution convex programming problem orders magnitude faster related work focus efficient methods qst broader set citations beyond qst defer reader references therein use algorithms qst new dates introduction protocol qst settings assuming multinomial distribution focus normalized negative objective see propose diluted iterative algorithm solution suggested algorithm exhibits good convergence monotonic increase likelihood objective practice despite success argument guarantees performance neither provable setup execution use reparameterization lagrange augmented maximum objective see multinomial distribution assumption authors state problem solved standard numerical procedures searching maximum objective use downhill simplex method solution parameters matrix albeit rely uniqueness solution reformulation due convexity original problem theoretical results nature transformed objective presence spurious local minima consider case maximum likelihood quantum state tomography additive gaussian noise informationally complete case assuming measurement operators traceless simple linear inversion techniques shown work accurately infer constrained state single projection step unconstrained state extension present gpu implementation algorithm recovers simulated density matrix within four hours however implementations linear system inversion could 
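For concreteness, the components defined above (spectral initialization, a gradient step on the factor, and a projection that amounts to rescaling) assemble into the following minimal Python sketch. The A_op/A_adj interface, the constant step size eta, and the stopping rule are our assumptions standing in for the paper's exact choices; the paper's step size involves the leading singular value of the initialization and the RIP constant, with proportionality constants absorbed into eta:

```python
import numpy as np

def projfgd(A_op, A_adj, y, r, eta, iters=500, tol=1e-7):
    """Sketch of projected factored gradient descent for
         min_U ||A_op(U U^H) - y||^2   s.t.  ||U||_F^2 <= 1,
    where X = U U^H encodes the rank-r PSD/trace structure directly.
    A_op: (d,d) Hermitian -> (m,) reals;  A_adj: (m,) -> (d,d) Hermitian."""
    G = A_adj(y)                                 # ~ negative gradient at zero
    G = (G + G.conj().T) / 2
    w, V = np.linalg.eigh(G)
    top = np.argsort(w)[::-1][:r]                # rank-r spectral initialization
    U = V[:, top] * np.sqrt(np.clip(w[top], 0.0, None))
    U /= max(1.0, np.linalg.norm(U))             # projection = simple rescaling
    X_prev = U @ U.conj().T
    for _ in range(iters):
        res = A_op(U @ U.conj().T) - y
        U = U - eta * (A_adj(res) @ U)           # grad_U; constants folded into eta
        U /= max(1.0, np.linalg.norm(U))         # keep tr(U U^H) <= 1
        X = U @ U.conj().T
        if np.linalg.norm(X - X_prev) <= tol * np.linalg.norm(X):
            break
        X_prev = X
    return X
```

Note that the loop contains no eigendecomposition: the per-iteration cost is dominated by applying the sensing map and by d-by-r matrix multiplications, which is the source of the advantage over eigensolver-based convex solvers discussed above.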
increase dramatically computational storage complexity dimension problem grows based extremal equations multinomial objective propose iteration method hyperparameters step size ascent many iterations required set initial conditions heuristically defined typically methods discussed lead optimization problems resulting slow convergence propose hybrid algorithm starts algorithm space order get initial rapid descent switch accelerated methods original space provided one determine switchover point cheaply multinomial objective initial phase hessian objective computed per iteration matrix along eigenvalue decomposition operation costly even moderate values heuristics proposed completion later phase authors exploit momentum techniques convex optimization lead provable acceleration objective convex state conclusions section acceleration techniques considered factored space constitute interesting research direction theoretical perspective provide convergence convergence rate guarantees use practice general parameterization density matrices ensures jointly positive definiteness unity trace order attain maximum value objective steepest ascent method proposed variables step size arbitrarily selected sufficient small parameter discussion regarding convergence convergence rate guarantees well specific set algorithm step size initialization study qst problem original parameter space propose projected gradient descent algorithm proposed algorithm applies convex objectives convergence stationary points could expected extend work two variants using momentum motions similar techniques proposed polyak nesterov faster convergence convex optimization algorithms operate informationally complete case similar ideas informationally incomplete case found recently presented experimental implementation tomography qubit system pauli basis measurements available achieve recovery practice reasonable time frame hundreds problem authors proposed computationally efficient estimator based factorization resulting method resembles gradient descent factors one presented paper however authors focus experimental efficiency method provide specific results optimization efficiency algorithm theoretical guarantees components initialization step size affect performance step size set sufficiently small constant one first provable algorithmic solutions qst problem convex approximations includes nuclear norm minimization approaches well proximal variants one follows minimize see also theoretical analysis within context mention work accunipdgrad algorithm proposed universal primaldual convex framework sharp operators lieu proximal qst considered application accunipdgrad combines flexibility proximal methods computational advantages conditional gradient methods use algorithm comparisons experimental section presents sparseapproxsdp algorithm solves qst problem objective generic gradient lipschitz smooth function updating putative solution refinements coming gradient way sparseapproxsdp avoids computationally expensive operations per iteration full theory iteration sparseapproxsdp guaranteed compute approximate solution rank achieves sublinear convergence rate however depending sparseapproxsdp might return low rank solution finally propose randomized singular value projection rsvp projected gradient descent algorithm qst merges gradient calculations truncated via randomized approximations computational efficiency overall program tailored tomography quantum states incorporating constraint structure two advantages first results faster 
algorithm enables deal state reconstruction reasonable time second allows prove accuracy projfgd estimator model errors experimental noise similar results numerical experiments conducted experiments matlab environment installed system ram equipped two intel xeon cache experiments error reported frobenius metric estimation true state note pure state experiments also report infidelity metric also use denote set density matrices algorithm time projfgd time infidelity time time infidelity table values median values independent monte carlo iterations first set experiments compare efficiency projfgd cone convex programs state art solvers within class solvers sedumi methods use rely matlab wrapper cvx experiments observed faster select comparison setting described section consider normalized density matrices wen obtain pauli measurements gaussian measurement error variance consider convex formulations compare projfgd estimator figures use notation cvx cvx simplicity consider two cases table shows median values independent experimental realizations log selection made algorithms return solution close optimum empirically observed projfgd succeeds even cases consider noiseless noisy settings order accelerate execution convex programs set solvers cvx low precision table observe method two orders magnitude faster methods projfgd achieves better performance error metrics cases faster higher qubit case could complete experiments due system crash ram overflow contrariwise method able complete task success within minutes cpu time figures show graphically convex schemes scale function time figure fix dimension study increasing number observations affects performance algorithms observe projfgd observations lead faster convergence hold cone programs figure fix number data points log scale dimension obvious convex solvers scale easily beyond whereas method handles cases within reasonable time time sec comparison projfgd methods cvx cvx projfgd number data points fig dimension fixed rank figure depicts noiseless setting numbers within figure error frobenius norm achieved median values time sec cvx cvx projfgd dimension fig number data points set log rank optimum point set rank figure depicts noiseless setting comparison projfgd methods compare method efficient firstorder methods convex accunipdgrad sparseapproxsdp rsvp consider two settings pure state rank nearly state latter case construct psd satisfying rank psd noise term fast decaying significantly smaller leading eigenvalues words wellapproximate cases model measurement vector noise kek number data points satisfy csam various values csam algorithms assumed rank known use reconstruct approximation methods require svd routine use lansvd propack software package experiments algorithms implemented matlab environment used code parts algorithms initialization use starting point algorithms either specific section iii random stopping criterion use tol set tolerance parameter tol convergence plots figure plots illustrates iteration timing complexities algorithm comparison pure state recovery setting corresponds dimensional problem moreover assume csam thus number data points initialization use proposed initialization section iii algorithms compute extract factor psd approximation project onto apparent projfgd converges faster vicinity compared rest algorithms observe also sublinear rate sparseapproxsdp inner plots reported table contains recovery error execution time results case case solve dimensional problem case rsvp sparseapproxsdp algorithms excluded 
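Both reported error metrics are straightforward to reproduce: the Frobenius error is the normalized distance between the estimate and the true state, and the infidelity is one minus the Uhlmann fidelity. A minimal sketch, assuming SciPy's matrix square root; for a pure target state the fidelity collapses to the overlap of the estimate with the state vector:

```python
import numpy as np
from scipy.linalg import sqrtm

def frobenius_error(rho_hat, rho_star):
    """|| rho_hat - rho_star ||_F / || rho_star ||_F."""
    return np.linalg.norm(rho_hat - rho_star) / np.linalg.norm(rho_star)

def infidelity(rho_hat, rho_star):
    """1 - F, with F = (tr sqrt( sqrt(rho*) rho_hat sqrt(rho*) ))^2."""
    s = sqrtm(rho_star)
    F = np.real(np.trace(sqrtm(s @ rho_hat @ s))) ** 2
    return 1.0 - F
```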
comparison due excessive execution time appendix provides extensive results similar performance observed values csam figure rightmost plot considers general case nearly wellapproximated density matrix lowrank density matrix case csam rank model increases algorithms utilize svd routine spend cpu time singular calculations certainly algorithm time accunipdgrad projfgd table comparison results reconstruction efficiency qubits csam applies multiplications however latter case complexity scale milder svd calculations metadata also provided table iii setting algorithm sparseapproxsdp rsvp accunipdgrad projfgd setting time time table iii results reconstruction efficiency time reported seconds cases csam completeness appendix provide results illustrate effect random initialization similar projfgd shows competitive behavior finding better solution faster irrespective initialization point timing evaluation total per iteration figure highlights efficiency algorithm terms time complexity various problem configurations algorithm fairly low per iteration complexity expensive operation problem matrixmatrix multiplications since algorithm shows also fast convergence terms number iterations overall results faster convergence towards good approximation even dimension increases figure shows total execution time scales parameters overall performance projfgd shows substantial improvement performance compared algorithms would like emphasize also projected gradient descent schemes also efficient problems due fast convergence rate convex approaches might show better sampling complexity performance csam decreases nevertheless one perform accurate mle reconstruction larger systems amount time using methods problems defer reader appendix due space restrictions rank data points rank data points accunipdgrad projfgd sparseapproxsdp number iterations rsvp sparseapproxsdp rsvp sparseapproxsdp accunipdgrad projfgd rank data points accunipdgrad projfgd cumulative time sec cumulative time sec number iterations cumulative time sec cumulative time sec total time total time fig left middle panels convergence performance algorithms comparison error frobenius norm total number iterations left total execution time cases correspond csam pure state right panel nearly state approximate setting csam rsvp sparseapproxsdp accunipdgrad projfgd rsvp sparseapproxsdp accunipdgrad projfgd rank fig timing bar shows total execution time corresponds different values top panel corresponds csam bottom panel corresponds csam cases noiseless summary conclusions work propose algorithm dubbed projfgd estimating quantum state hilbert space relatively small number data points showed empirically projfgd orders magnitude faster convex programs importantly prove proper initialization projfgd guaranteed converge global minimum problem thus ensuring provable tomography dure see theorem lemma setting model state psd matrix turn means estimator biased towards states however bias inherent qst protocols imposition positivity constraint techniques proofs applied cases proper scenaria beyond ones considered work restricted discussions measurement model random pauli observables satisfies rip conjecture results apply sensing settings informationally complete states see results presented independent noise model could applied noise models stemming finite counting statistics lastly focus state tomography would interesting explore similar techniques problem process tomography conclude short list interesting future research directions immediate goal application projfgd 
scenaria could completed utilizing infrastructure ibm watson research center could complement results found different quantum system beyond practical implementation identify following interesting open questions first estimator one methods qst experiments beyond use point estimator also used basis inference around point estimate via confidence intervals credible regions however still rigorous analysis factorization used work considers accelerated gradient descent methods qst original parameter space based seminal work polyak nesterov convex optimization methods one achieve orders magnitude acceleration theory practice exploiting momentum previous iterates remains open question proach could exploit acceleration techniques lead faster convergence practice along rigorous approximation convergence guarantees implementations like one remain widely open using approach order accelerate execution algorithm research along directions interesting left future work finally saw numerically random initialization noisy constrained settings works well careful theoretical treatment case open problem shed light notions restricted strong convexity smoothness relate qst objective consider restricted isometry property holds high probability pauli measurements low rank present simplified version definition main text definition restricted isometry property rip linear map satisfies constant satisfied matrices rank according quadratic loss function qst acknowledgments anastasios kyrillidis supported ibm goldstine fellowship amir kalev supported department defense appendix theory notation matrices represents inner product use frobenius spectral norms matrix respectively denote singular value denotes best approximation problem generalization notation definitions expand generality scheme problem setting broader set objectives following arguments hold real complex matrices consider criteria following form hessian given restricted strong convexity suggests restricted set directions small constant correspondence restricted strong convexity smoothness rip obvious lower upper bound quantity drawn restricted set turns linear maps satisfy rip low rank matrices also satisfy restricted strong convexity see theorem assuming rip condition number depends rip constants linear map particular since eigenvalues one show lie restricted matrices observe sufficiently small dimension sufficiently large high probability assume optimum satisfies rank analysis assume know set suggested main text solve factored space follows minimize minimize subject make connection qst objective set apart objective qst theory extends applications described strongly convex functions gradient lipschitz continuity ideas applied similar fashion case restricted smoothness restricted strong convexity state standard definitions square case definition let convex differentiable restricted convex matrices definition let convex differentiable function restricted gradient lipschitz continuous parameter matrices subject qst setting theory mostly focus sets satisfy following assumptions assumption endowed constraint set subset satisfies projection operator say entrywise scaling operation input also require following faithfulness assumption assumption let denote set equivalent torizations lead matrix assume resulting convex set respects structure summarizing faithfulness assumption assume means feasible set contains matrices lead moreover assume convex sets exists mapping two constraints equivalent guaranteed restrict discussion sets assumption satisfied representative 
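The rank-restricted definitions referenced above were garbled in extraction; under the assumption that the paper uses them in their usual shape, they read:

```latex
% rank-r restricted isometry property of the sensing map \mathcal{A}:
(1-\delta_r)\,\|X\|_F^2 \;\le\; \|\mathcal{A}(X)\|_2^2 \;\le\; (1+\delta_r)\,\|X\|_F^2
\quad\text{for all } X \text{ with } \operatorname{rank}(X)\le r;

% restricted strong convexity (lower) and restricted smoothness (upper) of f:
\tfrac{m_r}{2}\,\|Y-X\|_F^2 \;\le\; f(Y)-f(X)-\langle\nabla f(X),\,Y-X\rangle \;\le\; \tfrac{M_r}{2}\,\|Y-X\|_F^2 .
```

For the quadratic loss f(X) = ||A(X) - y||_2^2 these are linked exactly as the text indicates: RIP over rank-r matrices supplies both bounds along restricted directions, with m_r and M_r controlled by 1 - delta_r and 1 + delta_r respectively, and the resulting condition number depends on the RIP constant of the map.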
example consider qst case analysis use following step sizes qat lipschitz constant qat represents projection onto column space algorithm described main text use following step size given initial different lemma know thus proof work step size equivalent original step size proposed algorithm ease exposition sequence updates current estimate factored space putative solution might belong gradient step observe projection step onto observe constraint cases consider paper case algorithm simplifies algorithm simplicity drop subscript parenthesis parameter values apparent context important issue optimizing factored space existence possible factorizations use following rotation invariant distance metric definition let matrices define dist min brkf theorem let convex compact faithful set projection operator satisfying assumptions described let convex function satisfying definitions let current estimate assume current point satisfies dist given per iteration new estimate projfgd satisfies dist dist satisfies dist applied qst setting obtain following variation theorem theorem local convergence rate qst let quantum state system measurement vector random pauli observables corresponding sensing map let current estimate projfgd assume fies dist rip constant new estimate satisfies dist dist satisfies dist proof theorem analysis make use following lemma chapter characterizes effect projections onto convex sets inner products well provides triangle inequality projections see also figure simple illustration set unitary matrices assume projfgd initialized good starting point lemma let dist denotes value input matrix descending order later text present initialization assumptions leads global convergence results generalized theorem next present full proof following generalization theorem fig illustration lemma start following series equalities dist min due fact kat obtained adding subtracting focusing second term right hand side obtain substitute kat kat initial equation transforms dist dist focusing last term expression obtain observe special case iterates always within projection step equation equals zero recursion identical proof theorem interested case faithfulness assumption observe thus moreover according lemma last ing term equation satisfies thus expression becomes therefore going back original recursive expression obtain dist dist last term use following descent lemma proof provided section lemma descent lemma let restricted convex assumptions theorem following inequality holds true dist using lemma expression get dist dist expression obtained observing lemma concluding proof condition dist naturally satisfied since proof lemma recall define presenting proof need following lemma bounds one error terms arising proof lemma variation lemma proof presented section lemma let strongly convex assumptions theorem ing step size following qat bound holds true dist ready present proof lemma proof lemma first rewrite inner product shown rat follows adding subtracting let focus bounding first term right hand side consider points assumption feasible points smoothness get kat follows optimality since feasible point problem moreover restricted strong convexity get term expression given observe combining equality first term right hand side obtain expression focusing first term let qat nition fact qat combining equations obtain nature projection step easy verify qat qat denoting projection onto column space notice step size using characterization obtain kat kat dist kat let focus term bounded follows kat kat kat kqat 
transform follows kat follows symmetry follows sequence equalities inequalites combining obtain following bound kat combining expression want lower bound obtain due identity due inequality definition observe qat dist due obtained since substituting qat kat kat kat kat conjecture lower bound generic cases could possibly improved different analysis kat dist kat dist dist finally bound lemma using following see also figure top panel due constant front kat see also figure bottom special case qst assumption always satisfied according following corollary proof provided subsection corollary kakf projfgd inherently satisfies every guarantees without assumptions fig behavior constants depending expression kat due assumption thus last inequality substitute observe combining result obtain bounded follows thus lemma dist case translates dist thus conclude dist dist dist qat qat qat dist completes proof finally due facts proof lemma variant thus term proof lower bound follows qat qat iii qat using obtain dist note follows fact follows psd matrix von neumann trace inequality transformation iii use fact column space span subset span linear combination second term parenthesis first derive following inequalities use apparent later qat qat kqat remind dist dist qat qat dist qat dist qat dist iii qat bound first term right hand side observe qat qat qat use trivial information obtain due triangle inequality due generalized inequality iii due triangle inequality fact column span decomposed column span construction due assumption dist dist matrix combining obtain expression follows due lemma using bound dist hypothesis lemma iii due lemma due facts qat qat dist max dist dist iii iii dist dist dist dist dist proof corollary kat kat qat kat qat qat dist qat initialization restrict attention full rank case case assumes projection step projects time onto psd cone time full rank case convex set includes psd cone well norm constraints described main text particular case qst make restrictions provides efficient projection procedure satisfies constraints holds let denote corresponding projection step constraints satisfied simultaneously initialization propose follows similar motions consider projection weighted negative gradient onto assuming oracle model access though function evaluations gradient calculations provides cheap way find initial point approximation guarantees follows first inequality follows triangle inequality second holds property kabkf kbkf third follows step size bounded hence get qat dist completes proof finally follows using lemma stituting due factor constants lead bounding term constant thus conclude qat dist qat dist qat lemma let consider problem assumed convex optimum point rank apply projfgd initial point generic case satisfies dist appendix initialization srank srank proof show start section present specific initialization strategy projfgd completeness repeat definition optimization problem hand original space minimize subject factored space minimize recall assumption convex projection lemma subject observe feasible point since psd satisfy common symmetric norm constraints ones considered paper hence using strong convexity around get adding subtracting due using smoothness around get case algorithms use initial point projection onto psd cone case use random initialization configurations described caption figure table contains information regarding total time required convergence quality solution cases results almost pure density states provided figure next also provide pseudocode approach 
input arguments projfgd grad specifies gradient operator case set follows upper bounding quantity follows assumption hence rearranging terms get combining inequality obtain given use lemma obtain dist thus dist srank initialization simple introduces restrictions condition number condition number function finding simple initializations weaker restrictions remains open problem however shown one devise specific deterministic initialization given application practice projection step might easy compute due joint involvement convex sets practical solution would sequentially project onto individual constraint sets let denote projection onto psd cone consider proximate point given perform additional step guarantee special case qst one use procedure appendix additional experiments pseudocode figures show results regarding qst problem respectively case present performance terms number iterations needed well cumulative time required rho rho denotes forward linear operator pauli observables adjoint params matlab structure contains several hyperparameters params contains dimension rank density matrix choice random specific initialization random initial case selects conservative theory versus practical step size maximum number iterations tolerance stopping criterion function rhohat ahat projfgd params random initialization acur acur acur norm acur fro rhocur acur acur rhoprev rhocur use propack lansvd acur options rhocur lansvd options zeros ones norm fro initialization elseif compute gradf zeros ones norm fro rhocur gradf using propack lansvd rhocur options acur sqrt acur acur norm acur fro rhocur acur acur rhoprev rhocur rhocur svds end theory eta elseif practical eta end rhocur grada acur acur acur eta grada norm acur fro acur acur norm acur fro end rhocur acur acur test stopping criterion norm rhocur rhoprev fro norm rhocur fro break end rhoprev rhocur end rhohat acur acur ahat acur altepeter jeffrey kwiat photonic state tomography advances atomic molecular optical physics zhang pagano hess kyprianidis becker kaplan gorshkov gong monroe observation dynamical phase transition quantum simulator arxiv preprint quantum computing research https flammia silberfarb caves minimal informationally complete measurements pure states foundations physics gross liu flammia becker eisert quantum state tomography via compressed sensing physical review letters heinosaari mazzarella wolf quantum tomography prior information communications mathematical physics baldwin deutsch kalev measurements tomography physical review donoho compressed sensing ieee transactions information theory baraniuk compressive sensing ieee signal processing magazine recht fazel parrilo guaranteed solutions linear matrix equations via nuclear norm minimization siam review plan tight oracle inequalities lowrank matrix recovery minimal number noisy random measurements ieee transactions information theory recht exact matrix completion via convex optimization foundations computational mathematics kalev kosut deutsch quantum tomography protocols positivity compressed sensing protocols npj quantum information flammia liu direct fidelity estimation pauli measurements physical review letters liu universal matrix recovery pauli measurements advances neural information processing systems kokiopoulou bekas gallopoulos computing smallest singular triplets implicitly restarted lanczos bidiagonalization applied numerical mathematics baglama reichel augmented implicitly restarted lanczos bidiagonalization methods siam journal scientific computing baglama 
reichel restarted block lanczos bidiagonalization methods numerical algorithms cullum willoughby lake lanczos algorithm computing singular values vectors large matrices siam journal scientific statistical computing hochstenbach type svd method siam journal scientific computing stathopoulos preconditioned hybrid svd method accurately computing singular triplets large matrices siam journal scientific computing stathopoulos romero extended functionality interfaces primme eigensolver siam news blog haffner riebe becher roos schmidt benhelm korber blatt dur scalable entanglement trapped ions nature sun luo guaranteed matrix completion via nonconvex factorization ieee annual symposium foundations computer science focs zhao wang liu nonconvex optimization framework low rank matrix estimation advances neural information processing systems chen wainwright fast estimation projected gradient descent general statistical algorithmic guarantees arxiv preprint jain jin kakade netrapalli computing matrix squareroot via non convex local search arxiv preprint boczar simchowitz soltanolkotabi recht solutions linear matrix equations via procrustes flow proceedings international conference international conference machine jmlr org bhojanapalli kyrillidis sanghavi dropping convexity faster optimization annual conference learning theory proceedings machine learning research vol edited vitaly feldman alexander rakhlin ohad shamir pmlr columbia university new york new york usa park kyrillidis carmanis sanghavi matrix sensing without spurious local minima via approach artificial intelligence statistics lee matrix completion spurious local minimum advances neural information processing systems park kyrillidis bhojanapalli caramanis sanghavi provable factorization class matrix problems arxiv preprint park kyrillidis caramanis sanghavi finding solutions matrix problems efficiently provably arxiv preprint liang risteski recovery guarantee matrix factorization via alternating updates advances neural information processing systems wang arora haupt liu zhao symmetry saddle points global geometry nonconvex matrix factorization arxiv preprint zhang extended algorithms matrix optimization arxiv preprint wang zhang universal variance catalyst nonconvex matrix recovery arxiv preprint jin zheng spurious local minima nonconvex low rank problems unified geometric analysis arxiv preprint samuel burer renato monteiro nonlinear programming algorithm solving semidefinite programs via factorization mathematical programming samuel burer renato monteiro local minima convergence semidefinite programming mathematical programming eckart young approximation one matrix another lower rank psychometrika stewart early history singular value decomposition siam review lavor projected gradient method optimization density matrices optimization methods software michelot finite algorithm finding projection point onto canonical simplex journal optimization theory applications duchi singer chandra efficient projections onto learning high dimensions proceedings international conference machine learning acm kyrillidis becker cevher koch sparse projections onto simplex international conference machine learning qinqing zheng john lafferty convergent gradient descent algorithm rank minimization semidefinite programming random linear measurements advances neural information processing systems hradil knill lvovsky diluted algorithm quantum tomography phys rev teo hradil informationally incomplete quantum tomography quantum measurements quantum metrology 
banaszek ariano paris sacchi estimation density matrix physical review paris ariano sacchi maximumlikelihood method quantum estimation aip conference proceedings vol aip nelder mead simplex method function minimization computer journal smolin gambetta smith efficient method computing quantum state measurements additive gaussian noise ical review letters hou zhong tian dong wang nori xiang full reconstruction state within four hours new journal physics jiangwei shang zhengyun zhang hui khoon superfast reconstruction quantum tomography phys rev teo hradil informationally incomplete quantum tomography quantum measurements quantum metrology bolduc knee gauger leach projected gradient descent algorithms quantum state tomography npj quantum information nesterov method solving convex programming problem convergence rate soviet mathematics doklady vol kyrillidis cevher matrix recipes hard thresholding methods journal mathematical imaging vision becker cevher kyrillidis randomized singular value projection international conference sampling theory applications sampta gross flammia monz nigg blatt eisert experimental quantum compressed sensing system nature communications yurtsever quoc dinh cevher universal convex optimization framework advances neural information processing systems hazan sparse approximate solutions semidefinite programs lecture notes computer science sturm using sedumi matlab toolbox optimization symmetric cones optimization methods software toh todd solving programs using mathematical programming cvx research cvx matlab software disciplined convex programming version http chandrasekaran jordan computational statistical tradeoffs via convex relaxation proceedings national academy sciences matthias christandl renato renner reliable quantum state tomography physical review letters jiangwei shang hui khoon arun sehrawat xikun englert optimal error regions quantum state estimation new journal physics agarwal negahban wainwright fast global convergence rates gradient methods highdimensional statistical recovery advances neural information processing systems chen sanghavi general framework highdimensional estimation presence incoherence communication control computing allerton annual allerton conference ieee bubeck convex optimization algorithms complexity foundations machine learning mirsky trace inequality john von neumann monatshefte mathematik show experiments section random initialization performs well practice without requiring additional calculations involved however random initialization constraint case provides guarantees whatsoever rank data points accunipdgrad projfgd accunipdgrad projfgd accunipdgrad projfgd rank data points rsvp sparseapproxsdp number iterations accunipdgrad projfgd number iterations rsvp sparseapproxsdp rank data points accunipdgrad projfgd number iterations rank data points number iterations number iterations rsvp sparseapproxsdp rsvp sparseapproxsdp number iterations rsvp sparseapproxsdp accunipdgrad projfgd rsvp sparseapproxsdp rank data points rank data points cumulative time sec cumulative time sec cumulative time sec cumulative time sec cumulative time sec cumulative time sec fig convergence performance algorithms comparison total number iterations top total execution time bottom first second third column corresponds csam respectively cases pure state setting initial point cumulative time sec accunipdgrad projfgd cumulative time sec rsvp sparseapproxsdp accunipdgrad projfgd cumulative time sec rank data points rsvp sparseapproxsdp number iterations 
Fig.: Convergence performance of the algorithms in comparison; total number of iterations (top) and total execution time (bottom); random initialization for all algorithms; the first, second, and third columns correspond to the respective csam settings, all in the pure-state case. Axes: number of iterations and cumulative time (sec); series: RSVP, SparseApproxSDP, AccUniPDGrad, ProjFGD.
Table: Summary of comparison results for reconstruction efficiency under the stated stopping criterion; iteration counts are estimates and times are reported in seconds.
Fig.: Convergence performance of the algorithms in comparison; total number of iterations (left) and total execution time (right); the two left plots correspond to one problem case and the two right plots to the other, both at the stated csam and initial point.
fooling views new lower bound technique distributed computations congestion amir keren seri christoph dec december abstract introduce novel lower bound technique distributed graph algorithms bandwidth limitations define notion fooling views exemplify strength proving two new lower bounds triangle membership congest model algorithm requires log constant even graphs algorithm must take rounds implication former first proven separation local congest models deterministic triangle membership latter result first lower bound number rounds required even triangle detection limited bandwidth previous known techniques provably incapable giving bounds hope approach may pave way proving lower bounds additional problems various settings distributed computing previous techniques suffice ibm almaden research center technion department computer science ckeren serikhoury supported part israel science foundation grant mpi informatics saarland informatics campus clenzen introduction group players names log bits many days take someone find three players friends computation free beginning players know friends day player send log bits privately friends communication puzzle known triangle detection problem popular congest model distributed computing complexity poorly understood naive protocol player node tells friends friends sends neighborhood every neighbor takes single round local model rounds congest clever randomized protocol izumi gall provides solution log rounds congest essentially know problem work could ruled problem solved rounds even single round even bandwidth open question round complexity triangle detection congest model distributed computing triangle detection extensively studied problem models computation centralized setting best known algorithm involves taking cube adjacency matrix graph runs time found using complex computer program one wishes avoid impractical matrix multiplication problem solved time works designed algorithms sparse graphs graphs listing triangles approximately counting number weighted variants much exhaustive list infeasible moreover conjectures time complexity triangle detection variants among cornerstones complexity see highly algorithms designed settings distributed models quantum computing truly remarkable basic problem lead much research technical perspective open question one best illustrations lack necessity new techniques proving lower bounds distributed computing work present novel lower bound technique providing separation local congest models problem previous techniques provably incapable first progress towards answering open question lower bounds side prior lower bound techniques limits date essentially two techniques deriving lower bounds distributed graph algorithms first indistinguishability technique linial main source lower bounds local model message size unrestricted technique argues algorithm regardless message size seen function maps rhop neighborhood node output topology graph labeled unique log node identifiers input provided nodes randomized algorithms simply give node infinite string unbiased random bits part input congest stands synchronous model bandwidth bits particular standard local congest models correspond congest congest log respectively technique resulted large number locality lower bounds problems open question whether higher lower bounds found congest model see note part appeal one entirely forget algorithm instance coloring algorithm interpreted function assigning color possible correct algorithm must assign distinct colors pair 
neighborhoods may belong adjacent nodes feasible input graph generally gives rise graph showing rounds insufficient coloring colors equates showing chromatic number graph larger unfortunately technique take bandwidth restrictions account show separation local congest models triangle detection possibly one extreme examples solved single round local seems require rounds congest log additional examples symmetry breaking problems maximal independent set mis upon elaborate section second tool available generating distributed lower bounds information bottleneck technique first introduced implicitly peleg rubinovich idea reduce communication complexity problem typically set disjointness distributed problem argue fast distributed algorithm limited bandwidth would imply protocol exchanging bits approach yields large number strong lower bounds wide range global problems bound distance local change input may affect output examples complete list would justify entire survey lower bounds based information bottlenecks also proven local problems instance drucker show lower bound detecting fixed however technique inherently incapable proving lower bounds many problems particular completely fails problem triangle detection discussed drucker matter divide nodes graph among two players one know triangle one may principle hope lower bounds based communication complexity information complexity date result known intuitively lower bound triangle detection must combine two techniques argue small number bits sent nodes enough information distinguish neighborhood triangle one interestingly drucker prove technique possible without breakthroughs circuit complexity related model nodes send messages nodes indeed still open whether triangle detection solved rounds even powerful model likely know lack lower bounds blamed inability prove computational hardness results similar vein know whether solved polynomial even linear time contrast barrier known standard congest model communication limited input graph embarrassing situation huge gap barrier blame result applies small output triangle detection problem decision problem listing triangles graph tight bound log holds clever usage graphs drucker also able prove strong lower bound triangle detection restriction node sends message neighbors round broadcast fashion specifically show deterministic protocol model therefore also log broadcast requires rounds essentially lower bound holds randomized protocols strong exponential time hypothesis popular conjecture time complexity unfortunately even conjectures know get lower bound standard congest model finally worth pointing subtlety statement communication complexity provide lower bound triangle detection statement fully accurate assumption nodes initially know identifiers neighbors renders trivial infer joint view two neighbors whether participate triangle assumption known kti means knowledge topology distance excluding edges endpoints distance first defined difference node knows identifier node knows also identifiers neighbors focus abundant studies particular concerning message complexity distributed algorithms see references therein note acquiring knowledge neighbors identifiers requires sending log bits edge distinction insubstantial round complexity congest log therefore default assumption throughout wide parts literature however lower bound log round complexity triangle detection follows simple counting argument ultimately goal show lower bounds log consider work contribution paper introduce fooling views technique proving 
lower bounds distributed algorithms congestion able show first round complexity lower bounds triangle detection separating local congest models triangle one round requires log constant section triangle detection requires rounds even even size network constant size namespace section stress view main contribution bounds bandwidth lower bound algorithms tight hardly comes surprise algorithms need communicate entire neighborhood additionally believe messages extremely fast triangle detection possible rather present novel technique enables separate two models infeasible prior lower bound techniques hope crucial step towards resolving large gap lower upper bounds contrast models justified conditional hardness results basic idea fooling views combine reasoning locality bandwidth restrictions framing terms neighborhood graphs would mean label neighborhoods information nodes initially communication algorithm performs triangle membership problem every node must indicate whether participates triangle edges incident node however communication depends algorithm communication received earlier rounds enforcing challenging inductive reasoning prove lower bounds capture intuition technique think node receives messages neighbors regardless whether participates triangle fooled node intuitively triangle given network one nodes able detect triangle rounds communication may simply inform two nodes triangle round number thus crucial maintain perpetual state confusion nodes involved triangle however task detect whether specific triple ids connected triangle nodes solve simply exchanging one bit communication accordingly goal keep large subset namespace fooled long possible end think triangle none able detect triangle fooled triangle main idea show many fooled triangles rounds many triangles among fooled rounds well order express intuition one ingredients proof following extremal combinatorics result paul theorem theorem hypergraph nodes contains least edges must contain complete hypergraph part size using theorem able show many fooled triangles rounds set nodes triple set fooled triangle rounds blending counting indistinguishability arguments theorem derive lower bound algorithms bound serves proof concept technique power break bounds previous techniques demonstrating indeed possible note purely reasoning runs obstacle model log bits already crossed edge algorithm accordingly argue approach represents qualitative improvement existing techniques proving lower bounds higher requires new ideas hopeful combining technique sophisticated analysis lead much higher lower bounds triangle detection problems discussed additional indication proposed technique wider applicability apply detection showing lower bounds bandwidth algorithms however information bottleneck technique applicable obtain stronger bounds using fooling views details constructions given appendix main body paper focuses triangle detection conclude open questions section related work basic topology components lower bound section triangles main hardness show node deciding whether participates triangle comes knowing neighbors neighbor node unable distinguish two triangles one difference two cases single edge crossing two edges swap endpoints edge crossings previously aided construction lower bounds lower bounds message complexity broadcast symmetry breaking well lower bounds schemes congest model view results congest model proof concept technique rather bound attempts capture true complexity model attracting interest previous work shown cornerstone log 
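The extremal ingredient invoked above is a theorem of Paul Erdős on complete tripartite sub-hypergraphs; since the displayed bound was lost, we restate the standard form here as an assumption about what the paper uses:

```latex
% Erdős (1964): a 3-uniform hypergraph H on n vertices with at least
%   n^{3 - 1/t^2}
% hyperedges contains the complete 3-partite hypergraph K^{(3)}(t,t,t),
% i.e., disjoint parts A, B, C of size t each such that every triple
% {a, b, c} with a in A, b in B, c in C is a hyperedge.
e(H)\;\ge\; n^{\,3-1/t^{2}}\;\Longrightarrow\; K^{(3)}(t,t,t)\subseteq H .
```

In the argument sketched above, the hyperedges are the fooled triples: a large count of triangles that remain fooled after some round forces a large complete tripartite family of fooled triples to survive as well.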
algorithms maximal independent set made work even bandwidth problem studied also using standard framework reduction communication complexity lower bound number rounds required detection directly deduced fact lower bounds obtained using framework respect bandwidth therefore imply lower bounds also case model definitions model network nodes polynomial node starts knowledge well ids neighbors known model differs model node starts knowledge nodes communicate synchronous rounds node send messages neighbors model consider congest model bandwidth given focus following problem definition triangle membership triangle membership problem node needs detect whether part triangle remark second lower bound also applies triangle detection problem sufficient node learns triangle without able tell participates also guarantee given algorithm izumi gall bandwidth lower bound triangle membership section show following theorem theorem triangle membership problem solved algorithm congest model unless log observe implies log lower bound lower bound lower bound significantly easier obtain one section advantage demonstrating technique simpler context delving proof theorem simplifying exposition assume nodes send bits round send less bits remain silent easy verify affects constants asymptotic notation main line proof show size messages less log three nodes neighbors receives messages single communication round regardless whether connected see figure standard notion indistinguishability given bandwidth restrictions first time figure node receives messages regardless whether connected simplicity assume node exactly neighbors except special node fixed rest section two neighbors denote set neighbors node let denote message sent single round given following two notions fooling sets nodes fooling nodes allows capture indistinguishability setting specific shapes notion fooling views takes obtaining result definition let set nodes called set another set nodes node called node two sets nodes denote fnodes set nodes first step towards proving theorem show many nodes lemma many fooling nodes log holds proof assume towards contradiction thus non nodes denote set non nodes fnodes denote family sets size nodes fnodes fsets holds fnodes fsets observe must send unique message sets since otherwise two sets fsets therefore definition least one node fnodes implies log contradiction strength many nodes every node implies node node many nodes lemma log node set nodes size holds fnodes proof let indicator defined follows fnodes otherwise since lemma gives therefore must node set nodes size holds lemma allows prove solve triangle membership follows proof theorem let node provided lemma holds sends less log bits two sets nodes let node since definition holds two sets nodes summarize size messages less log two nodes connected receives messages single communication round regardless whether connected see figure round lower bound triangle detection main goal section prove following theorem theorem algorithm triangle membership congest model requires rounds even graphs maximum degree fact one show many nodes use section hard instances setting simple tripartite graphs degree follows family tripartite let graphs holds every node exactly one neighbor nutshell show must nodes detect whether constitute two triangles remarkably simple intuitive task requires develop novel machinery make use known results extremal graph theory approach compelling feature one round setting triangle detection equivalent triangle membership node detecting triangle infer 
part triangle needs inform neighbors thus obtain lower bound general possibly easier problem triangle detection moreover lower bound depends set possible ids actual number nodes graph theorem algorithm triangle detection congest model requires rounds even graphs maximum degree nodes given namespace size main challenge proving lower bound one round communication nodes round depend ids sets neighbors case first round rather depends also views rounds set messages nodes receive first rounds view node rounds may depend nodes neighborhood nevertheless since family graphs defined holds node part exactly one neighbor two parts cycle must connected component therefore triple connected triangle communication nodes round depends nodes similarly communication nodes round depends nodes main line proof show algorithm triangle membership receives messages exist first log rounds two different scenarios first scenario participates triangle second scenario participates see figure illustration denote message sent given three nodes round given connected triangle similarly given six nodes denote message sent round given nodes connected structure proof lower bound number rounds follows section present notion fooling sets triangles shares similar spirit definition basis fooling views technique section next section present connection deduce main result fooling sets triangles definition fix pair nodes let set triangles called set triangles figure structures use triangle sends message first rounds triangles formally extend notion fooling set triangles notion fooling rectangle triangles definition fix node rectangle called rectangle triangles satisfies following properties holds set triangles holds set triangles finally extend notion fooling rectangle triangles fooling cube triangles definition cube called cube triangles holds rectangle triangles holds rectangle triangles holds rectangle triangles following observation follows immediately definition observation cube triangles next step towards proving theorem show cube triangles size contains cube triangles size log log log lemma cube triangles size constant cube triangles size log log log key combinatorial ingredient proving lemma following corollary theorem corollary theorem every constant sufficiently large boolean cube contains least entries contains subcube side subcube size log prove lemma first prove weaker claim guarantees existence sufficiently large cube triangles satisfying first property definition prove existence cube holds rectangle triangles log applying claim different sides cube gives lemma claim cube triangles size constant cube size log holds fooling rectangle triangles proof fix consider messages receive neighbors round since holds exist nodes denote specific message therefore defining indicator variable follows otherwise gives implies follows least nodes set nodes size holds furthermore holds nodes mxi denote specific message indicator defined follows let otherwise boolean cube size argument implies notice contains least entries contains subcube therefore corollary cube side subcube size log log denote subcube claim follows proof lemma finish proof lemma apply claim three sides cube triangles deduce constant fooling cube triangles size log log log far observation lemma sufficiently many triangles nodes receive messages neighbors remains actually capture two different scenarios node participating triangle node keep receiving messages next discuss triangles defined fooling structures triangles previous section final step towards proving theorem 
presenting connection fooling structures triangles start following definitions definition let two disjoint triples triangle called fooling receives messages first rounds triangle formally capture connection triangles extend notion cube triangles notion cube definition cube called cube cube cube triangles see definition pair disjoint triples holds triangle see definition triangle fooling triangle goal next show log cube size following observation follows immediately definition observation cube size general life good know cube somewhere wild however interested lower bound number rounds need show amazing cube contains sufficiently large sets fooling cubes many rounds need prove following lemma lemma cube size cube size log log log proof first property definition cube cube triangles therefore lemma cube triangles size log log log show also cube show pair disjoint triples satisfies properties definition start property first prove pair disjoint triples holds triangle observe since fooling cube property definition holds therefore order show triangle remains show holds also need show observe since cube triangles holds furthermore since also cube property definition two triples holds means view rounds case participates triangle case participates turn implies sends message round two scenarios combining equation gives similarly property definition two triples holds implies combining equation gives completes proof pair disjoint triples holds triangle fooling cube satisfies property definition symmetric arguments also satisfies properties definition proof theorem observe sufficient prove log cube size observation cube size furthermore lemma fooling cube size cube size log log log therefore applying lemma repeatedly times implies log cube size discussion open questions type contribution work would complete without pointing many additional open questions fooling views technique possible candidate road making progress first raise question whether specification triangle membership problem inherently different complexity triangle detection latter standard way phrasing decision problems distributed computing everyone outputs instance someone outputs yes yes instance believe bound give algorithms section hold also triangle detection seems require deeper technical analysis lower bound number rounds congest hold also triangle detection explained section note sublinear algorithm solves triangle detection clear make solve triangle membership problem triangle listing triangles need output work also gives first sublinear algorithm triangle listing completing log rounds congest model well log lower bound see also open question triangle membership triangle detection triangle listing problems different complexities congest model results paper deterministic algorithms see technical obstacle making work randomized algorithms well open question deterministic complexity triangle strictly larger randomized complexity congest model section gives tight bound bandwidth required algorithm appendix address bandwidth algorithms question asked problems even know yet exact round complexity various symmetry breaking problems open question bandwidth complexity distributed algorithms various problems whether gaps local congest models occur symmetry breaking problems central open question hope technique shed light upon prime examples mis known whether indeed gap reason reductions communication problems provably incapable proving lower bounds problems partial solution obtained greedy algorithm extendable valid solution entire graph 
means one player solve problem set nodes deliver state nodes boundary player completing task another way see arguing communication suffice notice simulating sequential greedy algorithm requires fact little communication total despite taking many rounds open question various symmetry breaking problems complexity congest model strictly higher counterpart local model acknowledgements grateful michal dory eyal kushilevitz merav parter stimulating discussions also grateful ivan rapaport eric remila nicolas schabanel point made helped significantly simplifying section references abboud backurs williams current clique algorithms optimal valiant parser focs pages abboud vassilevska williams popular conjectures imply strong lower bounds dynamic problems focs pages abboud vassilevska williams matching triangles basing hardness extremely popular conjecture stoc pages afrati fotakis ullman enumerating subgraph instances using mapreduce data engineering icde ieee international conference pages ieee alon babai itai fast simple randomized parallel algorithm maximal independent set problem algorithms alon yuster zwick journal acm jacm awerbuch goldreich peleg vainish information communication broadcast protocols acm bansal williams regularity lemmas combinatorial algorithms theory computing naor naor algorithms distributed computing baruch fraigniaud randomized schemes proceedings acm symposium principles distributed computing podc pages belovs span programs functions stoc pages acm pagh williams zwick listing triangles international colloquium automata languages programming pages springer brandt fischer hirvonen keller rybicki suomela uitto lower bound distributed local lemma stoc pages buhrman durr heiligman hoyer magniez santha wolf quantum algorithms element distinctness ccc pages ieee fischer schwartzman vasudev fast distributed algorithms testing graph properties disc pages springer kaski korhonen lenzen paz suomela algebraic methods congested clique podc pages acm kavitha paz yehudayoff distributed construction purely additive spanners proceedings international symposium distributed computing disc paris france september pages khoury paz quadratic lower bounds congest model disc also corr chan speeding four russians algorithm one logarithmic factor soda pages czumaj lingas finding heaviest triangle harder matrix multiplication siam journal computing dolev lenzen peled tri tri finding triangles small subgraphs distributed setting disc pages springer drucker kuhn oshman power congested clique model proceedings acm symposium principles distributed computing podc pages elkin simple deterministic distributed mst algorithm time message complexities podc pages acm extremal problems graphs generalized graphs israel journal mathematics sep fraigniaud rapaport salo todinca distributed testing excluded subgraphs disc pages springer frischknecht holzer wattenhofer networks compute diameter sublinear time proceedings annual symposium discrete algorithms soda pages wigderson randomized communication complexity set disjointness theory computing henzinger krinninger nanongkai saranurak unifying strengthening hardness dynamic problems via online multiplication conjecture stoc pages acm itai rodeh finding minimum circuit graph siam journal computing izumi gall triangle finding listing congest networks proceedings acm symposium principles distributed computing podc pages jukna extremal combinatorics applications computer science texts theoretical computer science eatcs series springer kari matamala rapaport salo 
solving induced subgraph problem randomized multiparty simultaneous messages model sirocco pages springer king kutten thorup construction impromptu repair mst distributed network communication podc pages acm kolountzakis miller peng tsourakakis efficient triangle counting large graphs via vertex partitioning internet mathematics kothapalli scheideler onus schindelhauer distributed coloring log bit rounds proceedings international parallel distributed processing symposium ipdps kuhn moscibroda wattenhofer local computation lower upper bounds acm kushilevitz nisan communication complexity cambridge university press gall improved quantum algorithm triangle finding via combinatorial arguments focs pages ieee gall powers tensors fast matrix multiplication proceedings international symposium symbolic algebraic computation pages acm gall nakajima multiparty quantum communication complexity triangle finding tqc appear gall nakajima quantum algorithm triangle finding sparse graphs algorithmica lee magniez santha improved quantum query algorithms triangle finding associativity testing soda pages siam lenzen improved distributed steiner forest construction podc pages linial distributive graph algorithms global solutions local data proceedings annual symposium foundations computer science focs pages linial locality distributed graph algorithms siam luby simple parallel algorithm maximal independent set problem siam magniez santha szegedy quantum algorithms triangle problem siam journal computing robson zemmari optimal bit complexity randomized distributed mis algorithm distributed computing nanongkai sarma pandurangan tight unconditional lower bound distributed randomwalk computation podc pages acm naor lower bound probabilistic algorithms distributive ring coloring siam discrete pai pandurangan pemmaraju riaz robinson symmetry breaking congest model algorithms ruling sets disc also corr pai pandurangan pemmaraju riaz robinson symmetry breaking congest model algorithms ruling sets podc appear pandurangan robinson scquizzato tight bounds distributed graph computations corr pandurangan robinson scquizzato distributed algorithm minimum spanning trees stoc pages acm peleg distributed computing approach society industrial applied mathematics peleg rubinovich lower bound time complexity distributed spanning tree construction siam razborov distributional complexity disjointness theor comput sarma afrati salihoglu ullman upper lower bounds cost computation vldb volume pages vldb endowment sarma holzer kor korman nanongkai pandurangan peleg wattenhofer distributed verification hardness distributed approximation siam schank wagner finding counting listing triangles large graphs experimental study wea pages springer szegedy quantum query complexity detecting triangles graphs arxiv preprint williams multiplying matrices faster stoc pages acm williams hardness easy problems basing hardness popular conjectures strong exponential time hypothesis invited talk international proceedings informatics volume williams williams subcubic equivalences path matrix triangle problems focs pages improved combinatorial algorithm boolean matrix multiplication icalp pages springer membership immediate question whether get lower bounds bandwidth additional roundoptimal algorithms show generalize lower bound technique apply detecting membership larger cycles however curiously show section sake comparison cycles larger standard approach reductions communication complexity problems allows stronger lower bounds algorithm solving 
membership completes within exactly rounds simple standard indistinguishability argument even unlimited bandwidth rounds communication node may information nodes distance therefore rounds communication node distinguish whether participates see generalize technique observe first lower bound given theorem holds even specific node given input nodes needs solve triangle membership problem helpful extending lower bound case hence define problem formally figure constructing definition triangle membership triangle membership problem nodes given identity specific node node needs detect whether part triangle proof theorem actually proves following theorem theorem triangle membership problem solved rithm congest model unless log formally extend membership problem larger cycles follows definition membership membership problem node needs detect whether part theorem used prove following theorem let constant membership problem solved optimal algorithm congest model unless log constant prove theorem showing reduction triangle membership problem given definition show algorithm solving membership problem used solve triangle membership problem proof theorem first show construct appropriate instance membership problem given instance triangle membership problem start describing construction odd value show tweak handle even values well let instance triangle membership problem let set edges incident node odd define instance membership problem follows replace edge path puv length going new nodes denoted containing edges example replace edge path three edges going two new intermediate nodes see figure formally new graph defined uvi following observation follows directly construction observation odd node participates participates triangle since algorithm may use ids nodes need also assign unique ids nodes consistent arbitrary manner say assigning idg uvi bin nodes bin binary representation notice next assume towards contradiction algorithm solves membership problem optimal number rounds congest model log show algorithm solves triangle membership problem single round congest model log contradicts theorem completes proof values constant construct nodes simulate nodes running algorithm follows node sends neighbors message sends first round neighbor sends message sends neighbors node messages node receives first round backwards induction local simulation node knows view node uvi end first rounds implies knows message sent round since rounds node knows whether observation thus knows single round whether triangle handle even value construct similar manner replacing edge path nodes addition edge replaced path length two consists additional node similarly observation node participates participates triangle case odd given algorithm membership simulation rounds requires single round communication therefore reduction carries even values well observe therefore case odd achieve asymptotic lower bound triangle membership problem value membership using communication complexity show lower bound bandwidth needed algorithm solving membership using standard framework reduction communication complexity problem simplicity show even values similar construction works odd values well bound larger compared proof section theorem membership problem solved deterministic algorithm congest model unless log solved randomized algorithm succeeds high congest model unless integers constant order prove theorem show reduction communication complexity problem disjk communication complexity problem consists function true false two strings given inputs 
two players alice bob respectively players exchange bits communication order compute according protocol communication complexity protocol computing maximal number bits taken input pairs exchanged alice bob communication complexity minimum taken protocols compute problem disjk players alice bob receives input string containing exactly ones function disjk whose output false index true otherwise observe function constant deterministic communication complexity disjk known log randomized communication complexity known reduction adapt formalization family lower bound graphs given setting definition definition simplified family lower bound graphs fix integer function true false family graphs said family lower bound graphs membership following properties hold fixed set nodes graphs denote partition existence edges may depend existence edges may depend node participates iff false observe given family lower bound graphs disjk membership alice bob simulate algorithm membership checking output end algorithm solve disjk say event occurs high probability occurs probability constant proved rank method proving lower bounds deterministic protocols communication complexity read rank method see example section proof lower bound rank see page figure reduction communication complexity membership proof theorem organized follows first construct family lower bound graphs next show given algorithm alg membership messages size alice bob simulate alg exchanging bits construct following family lower bound graphs describing fixed graph construction generalize family graphs show family lower bound graphs disjk membership fixed graph construction fixed graph construction figure consists tree path size tree tree root node denoted connected two nodes root tree depth denote leaves leaves set leaves tree rooted set leaves tree rooted respectively define leaves leaves respectively adding edges corresponding inputs players alice bob receives input set nodes size alice connects nodes input leaves tree rooted leaf connected nodes similarly bob connects nodes input leaves tree rooted leaf connected nodes following observation follows directly construction observation node participates cycle length two sets alice bob disjoint therefore given algorithm alg membership alice bob simulate alg solve size input stings observe constant holds log log remains show given algorithm membership messages size alice bob simulate alg exchanging bits proof theorem let alg algorithm solving bership problem let two sets messages sent rounds message sent round crucial observation alice bob simulate nodes first rounds without communication neighborhoods nodes fixed therefore order players compute output rounds suffices alice send bob message bob send alice message observation implies bits suffice players correctly compute disjk therefore lower bounds disjk deterministic algorithm membership requires messages size log log randomized optimals round algorithm succeeds high probability requires integers constant mention one use function instead disjk obtain lower bound simply connecting leaf single node path based inputs however gives rather strong bound respect number leaves must also linear bound occur given construction
8
time parallel gravitational collapse simulation andreas kreienbuehl dec center computational sciences engineering lawrence berkeley national laboratory cyclotron road berkeley united states america pietro benedusi institute computational science faculty informatics della svizzera italiana via giuseppe buffi lugano switzerland daniel ruprecht school mechanical engineering university leeds woodhouse lane leeds united kingdom rolf krause institute computational science faculty informatics della svizzera italiana via giuseppe buffi lugano switzerland abstract article demonstrates applicability method parareal numerical solution einstein gravity equations spherical collapse massless scalar field account shrinking spatial domain time tailored load balancing scheme proposed compared load balancing based number time steps alone performance parareal studied black hole case experiments show parareal generates substantial speedup regime reproduce choptuik black hole mass scaling law introduction einstein field equations general relativity consist ten coupled partial differential equations pdes gravity couples forms energy enormous dynamic range spatiotemporal scales hence usually application advanced numerical methods provide solutions numerical relativity extensive use computing hpc made today almost hpc architectures massively parallel systems connecting large numbers compute nodes interconnect numerical simulations power systems harnessed algorithms feature high degree concurrency every algorithm strong serial dependencies provide inferior performance massively parallel computers solution pdes parallelization strategies developed mainly spatial solvers however light addresses akreienbuehl date december mathematics subject classification key words phrases gravitational collapse choptuik scaling parareal spatial coarsening load balancing speedup time parallel gravitational collapse simulation rapid increase number cores supercomputers methods offer additional concurrency along temporal axis recently begun receive attention idea introduced time spacetime multigrid methods studied recently widely used time parallel method parareal proposed recently introduced methods pfasst ridc mgrit historical overview offered given demonstrated potential integration methods parallel simulations methods could beneficial numerical relativity community however application straightforward often unclear priori good performance achieved article therefore investigate principal applicability time parallel parareal method solving einstein equations describing spherical gravitational collapse massless scalar field system also referred system equivalent equation expressed context curved geometry defines basic gravitational field theory interest therefore numerical relativity also quantum gravity summary numerically derived results given work choptuik brought forward novel physical results particular interest show parareal correctly reproduces expected mass scaling law mathematical theory shows parareal performs well diffusive problems constant coefficients diffusive problems coefficients numerical experiments show parareal converge quickly however given theory basic hyperbolic pdes expected parareal applied convection dominated problems converges slowly meaningful speedup possible special cases reasonable performance discussed certain hyperbolic pdes found form stabilization required parareal provide speedup surprisingly stabilization required equations describing gravitational collapse demonstrate plain parareal achieve 
significant speedup detailed analytical investigation case would definitely interest left future work one reason could solve characteristic coordinates discretization aligned directions propagation article structured follows section define system einstein field equations solve using parareal addition give details numerical approach discuss interplay parareal particular structure spatial mesh section discuss parareal method section numerical results presented finally section conclude summary discussion equations gravitational collapse einstein field equations planck units normalized index time via space via matter content specified definition tensor possibly along equations state together satisfy continuity equations equation defines set ten partial differential equations ten unknown metric tensor field components generality equations coupled nature six ten equations hyperbolic evolution equations remaining four elliptic constraints initial data represent freedom choose spacetime coordinates matter content consider minimally coupled massless scalar field tensor metric tensor field spherical symmetry natural introduce parametrization terms schwarzschild coordinates time coordinate stationary observer infinite radius omit addition cosmological constant term side equation observations suggest see term impact black hole formation studied neglected use einstein summation convention time parallel gravitational collapse simulation measures size spheres centered resulting einstein field equations analyzed numerically particular adaptive mesh refinement used resolve black hole formation physics investigation carried double null characteristic coordinates without mesh refinement see however finally effect quantum gravity modifications collapse studied adjusted characteristic coordinates use characteristic coordinates well exclude quantum gravity modifications also simplicity refer time coordinate space coordinate making ansatz sin metric tensor field using auxiliary field spacetime geometry along auxiliary field matter content complete field equations see overall system seen wave equation massless scalar field curved geometry boundary conditions regularity implies boundary consistent initial data choose gaussian wave packet exp also performed tests initial data similar shape hyperbolic tangent function much like choptuik purely serial time stepping since case found parareal performance resemble strongly case gaussian wave packet include results initial scalar field configuration thus characterized amplitude mean position width depending value parameters solution equations describe bounce wave packet black hole formation near boundary black hole appears outward null expansion measures relative rate change area element congruence null curves approaches zero black hole mass evaluated point toward vanishes numerical solution numerical grid depicted figure parametrized characteristic coordinates used numerical integration used coordinate representing time coordinate representing space integration thus takes place right triangle initial data defined along lower leg clearly spatial domain becomes smaller solution advanced note domain exactly right triangle corner small missing buffer zone extent needed spatial part numerical stencil fit computational domain thus consists points time stepping method solution equations use laxwendroff richtmyer method fine spacetime grid see employ time parallel method parareal see section need second computationally cheap time integration method time parallel gravitational 
collapse simulation field coordinate numerical domain parametrized characteristic coordinates scalar field solution snapshots black setting peak gaussian evolves along constant coordinate value also bounce occurs figure computational domain left gravitational scalar field evolution right choose explicit euler method coarse spacetime mesh parareal efficient cost coarse method small compared fine one choosing simple method coarse grid obtain good ratio see section optimal speedup right balance difference accuracy difference cost found integration space equations use method snapshots scalar field evolution resulting chosen fine grid discretization shown figure evolves along constant lines bounce occurs figure also shows size domain decreases evolution left boundary mass scaling practice simulation terminates black hole forms grows without bound case see details figure provides simplified illustration black hole region dotted portion shows simulation comes halt dashed line thus determine black hole mass record minimal expansion values via scalar derived equation last recorded minimal value termination simulation defines characteristic coordinate see figure use define via equation scalar approaches nears shown lower portion figure based numerical experiments choptuik presents among things relation amplitude gaussian equation black hole mass shows critical value bounce case black hole case based thereon demonstrates black hole mass scales according law positive constant value various initial data profiles demonstrate parareal correctly capture black hole mass scaling law although coarse level euler method alone also parareal requires less time beneficial investigation demanding critical solution requires simulation numerous black holes analysis however omitted article left future work time parallel gravitational collapse simulation scalar scalar coordinate simulation terminates black hole forms minimal weighted outward null expansion indicating bounce top black hole formation bottom shown figure illustrations clarify gravitational collapse parareal algorithm parareal method solution initial value problems outlined previous section comes discretizing equations marks end time parareal starts decomposition time domain npr temporal subintervals tss defined terms times npr npr denote serial time integration method high accuracy cost case richtmyer method cheap possibly much less accurate method case explicit euler method instead running fine method subinterval subinterval serially time parareal performs iteration index time process number npr iterations nit advantage expensive computation fine method performed parallel tss assume number tss equal number npr cores processes used time direction good speedup obtained fast comparison still accurate enough parareal converge rapidly see section detailed discussion parareal speedup section hinted interchangeability characteristic coordinates numerical integration therefore theoretically parareal could also used spatial integration simultaneously parallelize time space however interweaving two parareal iterations discussed article put aside future work spatial coarsening parareal order make cheaper improve speedup use less accurate time stepper also employ coarsened spatial discretization reduced number therefore need spatial interpolation restriction operator case see parareal algorithm given time parallel gravitational collapse simulation restriction operator use point injection interpolation operator use polynomial lagrangian interpolation order shown 
even simple toy problems convergence parareal deteriorate spatial coarsening interpolation used demonstrated section also holds true studied problem implementation implemented two different realizations parareal standard version pst see listing parareal correction computed uniformly prescribed iteration number contrast modified implementation pmo see listing parareal corrections performed tss solution may yet converged parareal always converges rate least one per iteration iterate assigned mpi rank greater equal current parareal iteration number see line listing otherwise iterations needed performed process remains idle thus iteration progresses processes enter idle state implementation realized future work criterion convergence used replaced check residual tolerance could negatively affect observed performance since requires essentially one iteration compute also bears mentioning recently demonstrated integration methods good candidates provide fault tolerance another difference standard modified implementation former time parallel fine evolution copy fine grid solution created see line listing modified listing copying circumvented use two alternating indices lines respectively iteration number determines value turn determines fine grid solution buffer used send receive data means corresponding mpi routines see lines listing two implementations also slightly different requirements terms storage seen line listing pst first equivalently first mpi rank fine grid solution assigned initial data beginning iteration requires one additional buffer held storage implementations need one coarse grid solution buffer three fine grid buffers speedup denote rco coarse rfi fine time stepper runtime recalling nit denotes number iterations required parareal converge given npr processes parareal theoretically achievable speedup nit npr rfi nit rco min npr rfi npr nit rco discussed estimate valid ideal case runtimes across subintervals perfectly balanced presence load imbalances time however differences runtimes across tss maximum speedup reduced spatial domain consider shrinking time tailored decomposition time axis used provide well balanced computational load discussed next section load balancing integrate triangular computational spacetime domain see figure straight forward uniform partitioning time axis results imbalanced computational load time first load balancing strategy henceforth refer based straight forward basic decomposition time axis assigns number time steps without regard computational cost shrinking domain tss later times carry fewer spatial runtimes rco rfip coarse fine time stepper respectively larger earlier tss later ones figure shows partition leads imbalanced computational load time portion extending across covers larger area thus larger number grid points portion also tested barycentric interpolation found performance terms runtimes speedup see sections inferior version parareal discussed used proceed integration beyond given end time based optimized scheduling tasks become idle implementation time parallel gravitational collapse simulation itializa tion coarse npr prediction coarse nit iteration npr mpi recv else init npr coarse npr mpi send ini tializa tion coarse npr prediction coarse nit iteration npr mpi recv npr coarse npr mpi send standard parareal implementation pst modified parareal implementation pmo figure pseudo code standard modified parareal implementation variable denotes coarse grid solution array three fine grid buffers imbalanced load time load balancing balanced 
load time load balancing figure illustration two different approaches decomposition time domain left right figure suggests early time tss shorter extent time later ones thus second strategy following refer also consider cost time steps order balance runtime rco rfip processes use decomposition time axis tss sum total coarse fine runtime balanced tss rco rfi npr rco rfip process done bisection approach making use fact use explicit rather implicit time integrators discussion thus cost time step directly proportional number spatial therefore total spacetime domain first divided two parts roughly equal number grid points time parallel gravitational collapse simulation sketched figure part divided required number tss reached note limits possible numbers tss powers figure shows traces one simulation featuring figure one figure horizontal axes correspond runtime vertical axes depict mpi rank numbers lower upper case three parareal iterations performed green regions indicate coarse fine integrators carrying work time spent mpi receives including waiting time shown red observe leads load imbalance incurs significant wait times processes handling later contrast processes idle times shown red mpi receives almost invisible case elimination wait times leads significant reduction runtime increase speedup shown section vampir trace parareal runtime rpa vampir trace parareal runtime rpa figure vampir traces implementation pmo npr nit two different load balancing strategies results speedup runtime measurements performed cray supercomputer piz swiss national supercomputing centre cscs lugano switzerland features compute nodes hold two intel xeon processors results total compute cores peak performance pflops occupies position november piz dora used gnu compiler version runtimes provide include cost operations simulations measuring convergence performed machine located della svizzera italiana maintained members institute computational science faculty results presented following use coarse grid resolution fine grid resolution also determined reference solution approximately measure serial fine stepper discretization error used serial fine time stepper step size first consider case black holes form figure shows npr two different sets initial data parameters relative defect rfi krfi time parallel gravitational collapse simulation measures difference parareal solution iterations serial fine solution rfi function characteristic coordinate figure use initial data parameters results early bounce wave packet simulations figure values leads late bounce defects plotted nit along serial coarse fine solution estimated discretization error krco rre krfi rre labeled coarse fine respectively observe figure data nit somewhat jagged various start end times tss near bounce region case parareal converges two iterations nit defect discretization error fact without bounce region near one iteration would required convergence late bounce scenario figure also observe rate convergence final time gives indication convergence following thus focus convergence final time convergence evolved field shown found least good nit nit nit nit coarse fine coordinate parareal defect time early bounce scenario defect npr defect npr nit nit nit nit coarse fine coordinate defect parareal time late bounce situation figure defect parareal fine method time fixed npr figures illustrate defect parareal end simulation various values npr interpolation left interpolation right interpolation parareal converge configuration stalls defect iteration count equals 
npr parareal converges definition provide speedup contrast parareal shows good convergence behavior interpolation npr less defect parareal falls approximate discretization error fine method single iteration otherwise npr npr two iterations required resulting speedups correspondingly adjusted values nit shown figure load balancing strategies see discussion section addition projected speedup according equation shown ratio rfi determined experimentally found npr advanced load balancing speedup closely mirrors theoretical curve basic load balancing performs significantly worse npr measured speedups fall short theoretical values peak npr start decrease note theoretical model blue line figure take account scaling limit serial correction step according amdahl law difference theory measured speedup therefore due overheads communication transfer meshes analysed seems unaffected load balancing tests documented found takes two iterations parareal converge well time parallel gravitational collapse simulation although load balancing strategy results significantly better speedup basic approach peak value provided schemes essentially increasingly large numbers cores computational load per eventually becomes small imbalances computational load insignificant instead runtime dominated overhead communication time communication load independent chosen load balancing depends solely number tss every one message sent received per iteration save first last therefore expected ultimately approaches load balancing lead comparable peak values demonstrate saturation speedup related significant increase time spent mpi routines eventually communication cost starts dominate computational cost left time slice time parallelization saturates spatial parallelization defect defect npr npr npr npr npr coarse fine theory speedup npr npr npr npr npr coarse fine iterations nit defect late bounce interpolation order iterations nit defect late bounce interpolation order cores npr parareal speedup interpolation figure parareal performance case terms convergence polynomial interpolation orders terms speedup figure illustrates reason behind speedup beyond npr first define rpa rco rfip rst rst denotes runtime spent stages different coarse fine integration assigned process consider overhead sending receiving data well interpolation overheads analyzed next introduce total overhead sum oto rst also runtime spent neither coarse fine integrator given average overhead defined geometric mean value oto tss pnpr oto oav npr finally define relative overhead individual stages rst rpa rpa runtime parareal processor ideally assumed derivation speedup model given equation rco rfip dominant costs case rco rfip rpa according equation oto therefore oav definition however seen figure oav small small values npr npr increases rapidly indicates ost time parallel gravitational collapse simulation overhead communication sources starts play dominant role npr increased figure shows relative overhead equation npr npr three different stages interpolation send receive send receive refer corresponding mpi routines significant increase relative overhead three stages number cores grows causing eventual speedup increasing npr overhead ost npr overhead oav cores npr average overhead receive npr receive npr interpolate npr interpolate npr send npr send npr core overhead caused three different parareal stages figure overhead communication sources increases npr leads parareal speedup decay consider complex case black hole forms time simulation goal compute black hole 
position via equation mass determined equation see section characteristic coordinates allow continue simulation past black hole formation event need way keep simulation terminating approaches see figure avoid need adaptively modify decomposition time domain carry supercritical case study using initial data parameter values near also used results figure parameters particular investigated partitions time axis npr black hole generated fine time integrator forms last unless becomes large fix thus parareal used tss except last one fine method executed compute black hole position implementation uses approach prevent complete termination simulation radicand definition equation fails exception thrown parareal iteration continue parareal iteration converges better better starting values provided last accuracy computed black hole position improves general implementation aiming production runs would need allow black hole formation tss last one left future work article focus lies investigating principal applicability parareal simulation gravitational collapse figure depicts choptuik scaling results solutions computed parareal npr first three iterations table lists generated values see section errors compared value provided fine integrator agrees result seen figure coarse integrator alone adequately resolve black holes small visible wrong means coarse method coarse sense correctly capture physics underlying investigated problem nonetheless parareal capable generating correct black hole physics one iteration time parallel gravitational collapse simulation theory mass speedup coarse nit nit nit fine criticality choptuik scaling parareal cores npr parareal speedup figure parareal performance case value coarse nit nit nit fine error value error table approximate values relative errors critical amplitude resulting straight line slope figure visualizes speedup achieved case including theoretical estimate according equation numbers iterations required parareal converge derived analysis like one plotted figure case basically values identical processes good speedup close theoretical bound observed larger core numbers however speedup reaches plateau performance longer increasing case npr increases computing times per eventually become small parareal runtime becomes dominated communication see figure even though temporal parallelization eventually saturates substantial acceleration almost factor using cores time possible corresponding parallel efficiency conclusion article assesses performance integration method parareal numerical simulation gravitational collapse massless scalar field spherical symmetry gives overview dynamics physics described corresponding einstein field equations presents employed numerical methods solve system formulated solved characteristic coordinates computational spacetime domain triangular later time steps carry fewer spatial strategy balancing computational cost per subinterval instead number steps discussed benefits demonstrated traces using vampir tool numerical experiments presented case parareal converges rapidly latter correctly reproduces choptuik mass scaling law one iteration despite fact used coarse integrator alone generates strongly flawed mass scaling law underlines capability parareal quickly correct coarse method resolve dynamics problem results time parallel gravitational collapse simulation given illustrate parareal presumably methods well used improve utilization parallel computers numerical studies black hole formation multiple directions future research emerge 
presented results evaluating performance gains computing critical solution would valuable next complexer collapse scenarios system axial symmetry binary black hole spacetimes could addressed extended implementation parareal could utilize sophisticated convergence criterion flexible black hole detection parallelism space via parareal latter would possible integration along characteristic took represent space solution initial value problems like temporal direction another topic interest adaptive mesh refinement used efficiently connection parareal time parallel methods seems open problem discussed introduction mathematical analysis convergence behavior parareal einstein equations would great interest well particularly since good performance unexpected view negative theoretical results basic hyperbolic problems finally incorporating integration method software library widely used black hole numerical relativity simulations would ideal way make new approach available large group domain acknowledgments would like thank matthew choptuik university british columbia vancouver canada jonathan thornburg indiana university bloomington united states america providing feedback suggestions earlier version manuscript would also like thank piccinali gilles fourestey swiss national supercomputing center cscs lugano switzerland andrea arteaga swiss federal institute technology zurich ethz switzerland discussions concerning hardware cscs research funded deutsche forschungsgemeinschaft dfg part exasolvers project priority programme software exascale computing sppexa swiss national science foundation snsf lead agency agreement grant research also funded future swiss electrical infrastructure furies project swiss competence centers energy research sccer commission technology innovation cti switzerland references private communication jonathan thornburg indiana university bloomington united states america alcubierre introduction numerical relativity volume international series monographs physics oxford university press oxford edition jun isbn aubanel scheduling tasks parareal algorithm parallel computing mar doi baumgarte shapiro numerical relativity solving einstein equations computer cambridge university press cambridge edition aug isbn berger oliger adaptive mesh refinement hyperbolic partial differential equations journal computational physics mar doi url http berrut trefethen barycentric lagrange interpolation siam review sep doi chen hesthaven zhu use reduced basis methods accelerate stabilize parareal method quarteroni rozza editors reduced order methods modeling computational reduction volume modeling simulation applications pages springer international publishing url http copy library parareal method obtained cloning git repository https time parallel gravitational collapse simulation choptuik universality scaling gravitational collapse massless scalar field physical review letters jan doi choptuik hirschmann marsa new critical behavior collapse physical review nov doi url http christlieb macdonald ong parallel integrators siam journal scientific computing url http christodoulou bounded variation solutions spherically symmetric field equations communications pure applied mathematics doi dai maday stable parareal time method hyperbolic systems siam journal scientific computing url http emmett minion toward efficient parallel time method partial differential equations communications applied mathematics computational science url http falgout friedhoff kolev maclachlan schroder parallel time integration 
multigrid siam journal scientific computing url http fischer hecht maday parareal time approximation equations kornhuber editors domain decomposition methods science engineering volume lecture notes computational science engineering pages berlin springer url http floater hormann barycentric rational interpolation poles high rates approximation numerische mathematik aug doi gander analysis parareal algorithm applied hyperbolic problems using characteristics bol soc esp mat gander analysis parareal algorithm applied hyperbolic problems using charateristics sociedad aplicada mar url http path gander years time parallel time integration multiple shooting time domain decomposition springer url http gander petcu analysis krylov subspace enhanced parareal algorithm linear problem esaim url http gander vandewalle analysis parareal method siam journal scientific computing url http garfinkle choptuik scaling null coordinates physical review may doi url http gundlach critical phenomena gravitational collapse living reviews relativity dec doi url http hackbusch parabolic methods computing methods applied sciences engineering pages url http horton multigrid method communications applied numerical methods url http horton vandewalle worley algorithm polylog parallel complexity solving parabolic partial differential equations siam journal scientific computing url http husain critical behavior quantum gravitational collapse advanced science letters jun doi url http time parallel gravitational collapse simulation kidder scheel teukolsky carlson cook black hole evolution spectral methods physical review sep doi url http komatsu dunkley nolta bennett gold hinshaw jarosik larson limon page spergel halpern hill kogut meyer tucker weiland wollack wright wilkinson microwave anisotropy probe observations cosmological interpretation astrophysical journal supplement series feb doi url http kreienbuehl quantum cosmology polymer matter modified collapse phd thesis university new brunswick fredericton campus department mathematics statistics aug kreienbuehl husain seahra modified general relativity model quantum gravitational collapse classical quantum gravity may doi url http kreienbuehl naegel ruprecht speck wittum krause numerical simulation skin transport using parareal computing visualization science url http lions maday turinici parareal time discretization pde comptes rendus acadmie des sciences series mathematics url http loeffler faber bentivegna bode diener haas hinder mundim ott schnetter allen campanelli laguna einstein toolkit community computational infrastructure relativistic astrophysics classical quantum gravity may doi url http minion hybrid parareal spectral deferred corrections method communications applied mathematics computational science url http nielsen hesthaven fault tolerance parareal method proceedings acm workshop hpc extreme scale ftxs pages new york usa acm isbn doi url http nievergelt parallel methods integrating ordinary differential equations commun acm url http poisson relativist toolkit mathematics mechanics cambridge university press cambridge pretorius numerical simulations gravitational collapse phd thesis university british columbia pretorius evolution binary spacetimes physical review letters sep doi url http pretorius lehner adaptive mesh refinement characteristic codes journal computational physics doi url http ruprecht krause explicit integration linear system computers fluids url http speck ruprecht toward integration pfasst parallel computing pages doi url http speck 
ruprecht krause emmett minion winkel gibbon massively parallel solver proceedings international conference high performance computing networking storage analysis pages los alamitos usa ieee computer society press url http time parallel gravitational collapse simulation thornburg adaptive mesh refinement characteristic grids general relativity gravitation may doi url http ziprick kunstatter dynamical singularity resolution spherically symmetric black hole formation physical review jul doi url http
5
toward depth estimation using lensless cameras salman asif nov department electrical computer engineering university california riverside email sasif coded masks used demonstrate thin lensless camera flatcam mask placed immediately top bare image sensor paper present imaging model algorithm jointly estimate depth intensity information scene single multiple flatcams use light field representation model mapping scene onto sensor light rays different depths yield different modulation patterns present greedy depth pursuit algorithm search volume estimate depth intensity pixel within camera present simulation results analyze performance proposed model algorithm different flatcam settings ntroduction cameras standard vision sensors system records visual information however cameras bulky heavy size material lens shape camera also fixed form physical constraints placing lens certain distance sensor lensless camera potentially thin lightweight operate large spectral range provide extremely wide field view curved flexible shape recently new lensless imaging system called flatcam proposed flatcam consists coded binary mask placed small distance bare sensor flatcam viewed example coded aperture system mask placed extremely close sensor mask pattern selected way image formation model takes linear separable form image reconstruction sensor measurements requires solving linear inverse problem one limitation imaging model assumes scene consists single plane fixed distance camera paper present new imaging model flatcam scene consists multiple planes different unknown depths use light field representation light rays different depths yield different modulation patterns use lightfield representation analyze sensitivity flatcam sampling pattern depth mismatch present greedy algorithm jointly estimates depths intensity pixel present simulation results demonstrate performance algorithm different settings fig imaging lensless camera consists bare sensor fixed binary mask top every light source within camera casts shadow mask sensor resulting multiplexed image sensor shadow light source depends location respect assembly pursuit algorithm reconstructs image scene background related work pinhole camera classical example lensless camera opaque mask single pinhole placed front surface pinhole camera potentially take arbitrary shape however major drawback pinhole camera allows tiny fraction ambient light pass single pinhole therefore typically requires long exposure times coded aperture imaging systems extend idea pinhole camera using mask multiple pinholes however image formed sensor linear superposition images multiple pinholes need solve inverse problem recover underlying scene image sensor measurements primary purpose coded aperture increase amount light recorded sensor coded aperture system offers another advantage virtue encoding light different directions depths differently note bare sensor provide intensity light source spatial location mask front sensor encodes directional information source sensor measurements coded aperture system every light source scene casts unique shadow mask onto sensor therefore sensor measurements encode information locations intensities light sources scene consider single light source dark background image formed sensor shadow mask change angle light source mask shadow sensor shift furthermore change depth light source size shadow change see figure thus represent relationship pinhole mask sensor coded mask sensor pinhole mask sensor coded mask sensor fig examples imaging pinhole 
coded cameras light rays direction hit mask rays pass transparent regions holes pinhole cameras preserve angular information lose depth information points along angle yield identical images irrespective depths coded cameras record coded combination light different directions better preserve depth information points scene sensor measurements linear system depends pattern placement mask solve system using appropriate computational algorithm recover image scene imaging capability coded aperture systems known since pioneering work domain following excerpt summarizes well one reconstruct particular depth object treating picture formed aperture scaled size shadow produced depth however classical methods assume scene consists single plane known depth paper assume depth scene consists multiple depth planes true depth map unknown time reconstruction cameras traditionally used imaging wavelengths beyond visible spectrum xray imaging lenses mirrors expensive infeasible maskbased lensless designs proposed flexible selection compressive imaging using transmissive lcd panel separable coded masks recent years coded masks light modulators added cameras different configurations build novel imaging devices capture image depth light field single coded image light field imaging moving lensless cameras demonstrated imaging lensless camera recently demonstrated iii lat eplacing lenses coded masks computations flatcam coded aperture system consists bare planar sensor binary mask cameras traditionally used imaging wavelengths beyond visible spectrum imaging lenses mirrors expensive infeasible bare sensor provide information intensity light source spatial location adding mask front sensor encode directional information source sensor measurements imaging model assumes scene consists single plane parallel planes let consider imaging system shown fig mask placed distance front sensor array pixels scene consists single plane distance sensor scene pixels let denote scene pixel uniformly distributed along angular interval represent measurement sensor pixel denotes modulation coefficient mask light ray scene pixel sensor pixel location write compact form denotes system matrix maps scene pixels sensor pixels linear system solve using appropriate computational algorithm recover image scene details found epth estimation using lat ams imaging model model assumes scene consists single plane fixed depth system matrix encodes mapping scene points plane sensor pixels new model consider scene consists multiple planes contribute sensor measurements without loss generality consider imaging system fig assume sensor plane centered origin mask plane placed front distance scene consists planes depths scene pixels distributed uniformly along angles interval describe measurement sensor pixel denotes modulation coefficient mask light ray light source sensor pixel location write compact form denotes intensity pixels plane depth matrix represents mapping onto sensor case imaging let represent light distribution measurements sensor pixel located described using imaging system geometry mask sensor planes separated distance point scene angle distance contributes sensor measurements goal jointly estimate depth intensity pixel within lightfield representation system angle depth scene point encode intercept depth respective line lightfield horizontal lines denote mask patten lightfield first modulated mask pattern integrated sensor plane fig geometry imaging system corresponding light field representation following system linear equations mask 
sin sin mask denotes transparency value mask location within mask plane mask pattern symmetric separable space write measurements matrix denotes light distribution corresponding plane depth furthermore include multiple cameras different locations orientations respect reference frame system provide multiple sensor measurements form denotes sensor measurements camera matrix represents mapping onto camera joint image depth reconstruction estimate depth intensity pixel within cameras using greedy algorithm assume sparse prior nonzero value one depth proposed algorithm inspired structured sparse recovery algorithms modelbased compressive sensing simplify presentation let represent following general linear system light distribution spatial resolution depth planes denotes linear measurement operator denotes sensor measurements suppose current estimate exactly one depth assigned pixel let denote initial depth map experiments initialize depth estimate farthest plane scene proposed depthselective pursuit algorithm iterative method performs following three main steps every iteration compute proxy depth estimate first select new candidate depth pixel picking maximum magnitude following proxy map corresponding pixel let denote new depth map merge depths estimate image first merge original depth estimate proxy depth estimate let denote merged depth support solve problem merged depth support arg minl kit prune depth threshold image prune depth estimate every spatial location picking depth corresponding higher pixel intensity finally threshold one nonzero depth per spatial location depth sampling sensitivity single camera sensor measurements single point source location described sin mask lightfield expression note slope line corresponding light source inversely proportional depth light source moves farther assembly line would rotate around center light source moves along plane fixed depth line would shift slope therefore imaging model select depth planes given range sampling lightfield uniform angles results planes depths xperimental results validate performance proposed imaging model reconstruction algorithm performed extensive simulations different settings flatcam parameters first show results simple simulation scene consists three depth planes shown fig simulated imaging system binary mask placed distance sensor simulated voxel space ten depth planes within depth range chose ten depth planes uniformly sampling lightfield representation simulated scene spatial resolution mask binary random sequence sensor pixels generate sensor measurements assumed scene consists three planes chosen random ten fixed planes added small amount gaussian noise number imaging depth planes single camera three cameras table psnr comparison reconstructed images different number depth planes different number cameras test scene three cards placed three different depths picked depth planes random image reconstructed assuming scene consists single plane fixed depth psnr number tilted planes simulation results three camera system also summarized table example image reconstruction case shown fig eferences image reconstructed using algorithm jointly estimates depth intensity pixel psnr image reconstructed three cameras via algorithm jointly estimates depth intensity pixel psnr fig simulation results demonstrate effect depth sensitivity result joint depth intensity reconstruction algorithm true measurements reconstruct scene solved pursuit algorithm described previous section results presented fig denotes pixel intensities three 
planes scene denotes image reconstructed assuming pixels belong plane fixed depth denotes images reconstructed solving algorithm measurements next discuss experiment demonstrates robustness proposed model method mismatch locations original depth planes used reconstruction experiment simulated imaging system depth planes chosen uniformly lightfield representation selected three depth planes scene random calculated peak signal noise ratio psnr recovered images present psnr test averaged ten instances table see quality reconstruction remains almost increase number depth planes model computational complexity however slightly increases increase number depth planes finally present experiment simulated system three lensless cameras convex geometry one camera used reference generate depth planes scene two cameras see tilted planes field view advantage configuration depth information pixels converted angular information however configuration also makes strong assumption scene consists finite asif ayremlou sankaranarayanan veerarghavan baraniuk flatcam thin cameras using coded aperture computation ieee transactions computational imaging vol sept fenimore cannon coded aperture imaging uniformly redundant arrays applied optics vol dicke cameras gamma rays astrophysical journal vol cannon fenimore coded aperture imaging many holes make light work optical engineering vol durrant dallimore jupp ramsden application pinhole coded aperture imaging nuclear environment nuclear instruments methods physics research section accelerators spectrometers detectors associated equipment vol brady optical imaging spectroscopy john wiley sons busboom schotten uniformly redundant arrays experimental astronomy vol barrett horrigan fresnel zone plate imaging gamma rays theory appl vol nov online available http zomet nayar lensless imaging controllable aperture ieee computer society conference computer vision pattern recognition vol huang jiang matthews wilford lensless imaging compressive sensing ieee international conference image processing deweert farm lensless imaging separable masks optical engineering vol levin fergus durand freeman image depth conventional camera coded aperture acm transactions graphics tog vol acm veeraraghavan raskar agrawal mohan tumblin dappled photography mask enhanced cameras heterodyned light fields coded aperture refocusing acm transactions graphics tog vol marwah wetzstein bando raskar compressive light field photography using overcomplete dictionaries optimized projections acm transactions graphics tog vol zhang chen light field capturing lensless cameras ieee international conference image processing icip vol ieee antipa kuo heckel mildenhall bostan waller diffusercam lensless imaging arxiv preprint baraniuk cevher duarte hegde modelbased compressive sensing ieee transactions information theory vol
1
odular ontinual earning nified isual nvironment blue sheffer department psychology stanford university stanford bsheffer dec kevin feigelis department physics stanford neurosciences institute stanford university stanford feigelis daniel yamins departments psychology computer science stanford neurosciences institute stanford university stanford yamins bstract core aspect human intelligence ability learn new tasks quickly switch flexibly describe modular continual reinforcement learning paradigm inspired abilities first introduce visual interaction environment allows many types tasks unified single framework describe reward map prediction scheme learns new tasks robustly large state action spaces required environment investigate properties module architecture influence efficiency task learning showing module motif incorporating specific design principles early bottlenecks polynomial nonlinearities symmetry significantly outperforms standard neural network motifs needing fewer training examples fewer neurons achieve high levels performance finally present architecture task switching based dynamic neural voting scheme allows new modules use information learned previouslyseen tasks substantially improve learning efficiency ntroduction course everyday functioning people constantly faced environments required shift unpredictably multiple sometimes unfamiliar tasks botvinick cohen nonetheless able flexibly adapt existing decision schemas build new ones response challenges arbib humans support flexible learning task switching largely unknown neuroscientifically algorithmically wagner cole investigate solving problem neural module approach simple decision modules dynamically allocated top underlying sensory system andreas sensory system computes visual representation decision modules read sensory backbone large complex learned comparatively slowly significant amounts training data task modules deploy information base representation must contrast lightweight quick learned easy switch case tasks results neuroscience computer vision suggest role fixed general purpose visual representation may played ventral visual stream modeled deep convolutional neural network yamins dicarlo razavian however algorithmic basis efficiently learn dynamically deploy visual decision modules remains far obvious controller agent remap module fixed visual backbone touchstream environment policy figure modular continual learning touchstream environment touchstream environment gui continual learning agents spectrum visual reasoning tasks posed large unified action space timestep environment cyan box emits visual image reward agent recieves input emits action action represents touch location screen screen height width environment policy program computing function agent action history agent goal learn choose optimal actions maximize amount reward recieves time agent consists several component neural networks including fixed visual backbone yellow inset set learned neural modules grey inset red inset mediates deployment learned modules task solving modules use remap algorithm learn estimate reward function action heatmap conditional agent recent history using sampling policy reward map agent chooses optimal action maximize aggregate reward standard supervised learning often assumed output space problem prespecified manner happens fit task hand classification task discrete output fixed number classes might determined ahead time continuous estimation problem target might chosen instead convenient simplification supervised learning 
reinforcement learning contexts one interested learning deployment decision structures rich environment defining tasks many different natural output types simplification becomes cumbersome beyond limitation build unified environment many different tasks naturally embodied specifically model agent interacting touchscreenlike gui call touchstream tasks discrete categorization tasks continuous estimation problems many combinations variants thereof encoded using single common intuitive albeit large output space choice frees programmatically choose different output domain spaces forces confront core challenge naive agent quickly emergently learn implicit interfaces required solve different tasks introduce reward map prediction remap networks algorithm continual reinforcement learning able discover implicit interfaces large action spaces like touchstream environment address two major algorithmic challenges associated learning remap modules first module architectural motifs allow efficient task interface learning compare several candidate architectures show incorporating certain intuitive design principles early visual bottlenecks polynomial nonlinearities concatenations significantly outperform standard neural network motifs needing fewer training examples fewer neurons achieve high levels performance second system architectures effective switching tasks present architecture based dynamic neural voting scheme allowing new modules use information learned tasks substantially improve learning efficiency formalize touchstream environment introduce remap algorithm describe evaluate comparative performance multiple remap module architectures variety touchstream tasks describe dynamic neural voting evaluate ability efficiently transfer knowledge remap modules task switches mts localization image sample match sample match reward map figure exemplar touchstream tasks illustration several task paradigms explored work using touchstream environment top row depicts observation bottom shows ground truth reward maps red indicating high reward blue indicating low reward binary task stereotyped task task using dataset object localization elated ork modern deep convolutional neural networks significant impact computer vision artificial intelligence krizhevsky well computational neuroscience vision yamins dicarlo recent growing literature neural modules used solving compositional visual reasoning tasks andreas work apply idea modules solving visual learning challenges continual learning context existing works rely choosing menu module primitives using different module types solve subproblems involving specific datatypes without addressing modules forms discovered first place paper show single generic module architecture capable automatically learning solve wide variety different tasks unified space simple controller scheme able switch modules results also closely connected literature lifelong continual learning kirkpatrick rusu part literature concerned learning solve new tasks without catastrophically forgetting solve old ones zenke kirkpatrick use modules obviates problem instead shifts hard question one modules learned effectively continual learning literature also directly addresses knowlege transfer newly allocated structures chen rusu fernando largely addresses transfer learning lead higher performance rather addressing improve learning speed aside reward performance focus issues speed learning task switching motivated remarkably efficient adaptability humans new task contexts existing work continual 
learning also largely address specific architecture types learn tasks efficiently independent transfer focusing first identifying architectures achieve high performance quickly individual tasks investigation naturally focuses efficiently identify components architectures works also make explicit priori assumptions structure tasks encoded models output type number classes rather address general question emergence solutions embodied case learning approaches wang duan well schema learning ideas arbib mcclelland typically seek address issue continual learning complex extract correlations tasks long timescale context burden environment learning placed individual modules thus comparatively compared typical approaches unlike case mostly limited small state action spaces recent work general reinforcement learning ostrovski addressed issue large action spaces sought address multitask transfer learning large action spaces ouch tream nvironment agents environment exposed many different implicit tasks arising without predefined decision structures must learn fly appropriate decision interfaces situation interested modeling agents learning task environment mimic unconstrained nature real world describe touchstream environment attempts simplified domain problem setup consists two components environment agent interacting extended temporal sequence fig timestep environment emits rgb image height width scalar reward conversely agent accepts images rewards input chooses action response action space available agent consists pixel grid height width input image environment equipped policy unknown agent time step computes image reward function history agent actions images rewards work agent neural network composed visual backbone fixed weights together module whose parameters learned interaction environment agent goal learn enact policy maximizes reward obtained time unlike episodic reinforcement learning context touchstream environment continuous throughout course learning agent never signaled reset initial internal state however unlike traditional continuous learning context sutton barto touchstream may implicitly define many different tasks associated characteristic reward schedule agent experiences continual stream tasks implicit association reward schedule state reset must discovered agent framing action space agent possible pixel locations state space arbitrary image wide range possible tasks unified single framework cost requiring agents action space congruent input state space thus quite large presents two core efficiency challenges agent given task must able quickly recognize interface task transfer knowledge across tasks smart way goals complicated fact large size agent state action spaces although work modern computer datasets tasks work imagenet deng lin also inspired visual psychology neuroscience pioneered techniques controlled visual tasks embodied real reinforcement learning paradigms horner rajalingham especially useful three classes task paradigms span range ways discrete continuous estimation tasks formulated including localization tasks fig tasks paradigm common approach physically embodying discrete categorization tasks gaffan harrison example simple discrimination task shown fig agent rewarded touches left half screen shown image dog right half shown butterfly tasks made difficult increasing number image classes complexity reward boundary regions experiments use images classes imagenet dataset deng tasks mts paradigm another common approach assessing visual categorization abilities murray 
mishkin mts task shown fig trials consist sequence two image frames sample screen followed match screen agent expected remember object category seen sample frame select onscreen button really patch pixels match screen corresponding sample screen category unlike tasks mts tasks require working memory localized spatial control complex mts tasks involve sophisticated relationships sample match screen fig using object detection challenge dataset lin sample screen shows isolated template image indicating one classes match screen shows scene dataset containing least one instance class agent rewarded chosen action located inside boundary instance agent pokes inside correct class mts task hybrid categorical continuous elements meaning phrased standard supervised learning problem categorical readout class identity continous readout object location would required localization fig shows continuous localization task agent supposed mark bounding box object touching opposite corners two successive timesteps reward proportionate intersection union iou value predicted bounding box relative ground truth bounding box iou area localization unlike area bgt mts paradigms choice made one timestep constrains agent optimal choice future timestep picking upper left corner bounding box first step contrains lower right opposite corner chosen second although tasks become arbitrarily complex along certain axes tasks presented require memory future prediction task requires knowledge past timesteps perfect solution always exists within timesteps point minimal required values different across various tasks work however investigations set maximum required values across tasks thus agent required learn safe ignore information past irrelevant predict past certain point future begin considering restricted case environment runs one semantic task indefinitely showing different architectures learn solve individual tasks dramatically different levels efficiency expand considering case environment policy consists sequence tasks unpredictable transitions tasks exhibit cope effectively expanded domain eward rediction touchstream environment necessarily involves working large action state spaces methods handling situation often focus reducing effective size spaces either via estimating pairs clustering actions ostrovski take another approach using neural network directly approximate modulated mapping action space reward space allowing learnable regularities interaction implicitly reduce large spaces something manageable simple choice policies introduce algorithm efficient multitask reinforcement learning large action state spaces reward map prediction remap etwork lgorithm standard reinforcement learning situation agent seeks learn optimal policy defining probability density actions given image state remap algorithm calculated simple fixed function estimated reward remap network neural network parameters whose inputs history previous timesteps agent actions activation encoding agent state space explicitly approximates expected reward map across action space number future timesteps mathematically number previous timesteps considered length future horizon considered history state space encodings produced fixed backbone network history previously chosen actions map map action space reward space predicted reward maps constructed computing expected reward obtained subsample actions drawn randomly predicted reward steps future horizon produced reward prediction maps one timestep future horizon agent needs determine believes single best 
action expected reward maps remap algorithm formulates normalizing predictions across maps separate probability distributions sampling action distribution maximum variance agent computes policy follows dist norm mjt orm min normalization removes minimum map dist ensures probability distribution parameterized functional family varargmax operator chooses input largest variance sampling procedure described equation uses two complementary ideas exploit spatial temporal structure efficiently explore large action space since rewards real physical tasks spatially correlated sampler equation allows effective exploration potentially informative actions would estimate apparent optimum policy order reduce uncertainty remap algorithm explores timesteps greatest reward map variance varargmax function nonlinearly upweights timeframe highest variance exploit fact points time carry disproportianate relevance reward outcome somewhat analagously operates convolutional networks although standard action selection strategy used place one pseudo maps empirically found policy effective efficiently exploring large action space parameters remap network learned gradient descent loss reward prediction error map mjt compared true reward reward prediction corresponding action chosen timestep participates loss calculation backpropagation error signals minibatch maps rewards actions collected several consecutive inference passes performing parameter update remap algorithm summarized algorithm remap reward map prediction initialize remap network initialize state action memory buffers timestep observe encode state space network append state buffer subsample set potential action choices uniformly produce expected reward maps select action according policy execute action environment store action buffer receive reward calculate loss previous timesteps mod batch size perform parameter update throughout work take fixed backbone state space encoder convnet pretrained imagenet simonyan zisserman resolution input network pixels action space default functional family used action selection scheme identity although tasks benefiting high action precision localization mts often optimal sample boltzmann distribution reward prediction errors calculated using loss logits smooth approximations heaviside function analogy fficient eural odules task earning main question seek address section specific neural network structure used remap modules key considerations modules easy learn requiring comparatively training examples discover optimal parameters easy learn meaning agent quickly build new module reusing components old ones intuitive example example consider case simple binary stimulusresponse task fig see dog touch right butterfly touch left one decision module perfect reward predictor task expressed analytically relu relu relu relu heaviside function components action relative center screen matrix expressing class boundary bias term omitted clarity positive image dog must also positive touch right predict positive reward conversly negative butterfly must negative left touch predict reward neither conditions hold terms equal zero formula predicts reward since vertical location action affect reward involved reward calculation task equation three basic ideas embedded structure early visual bottleneck general purpose feature representation greatly reduced dimension case features vgg layer prior combination action space multiplicative interaction action vector bottlenecked visual features symmetry first term formula partner second term 
reflecting something spatial structure task next sections show three principles generalized parameterized family networks visual bottleneck parameters decision structure form equation emerge naturally efficienty via learning given task interest ems odule section define generic remap module lightweight encodes three generic design principles perfect formula uses small number learnable parameters define concatenated square nonlinearity concatenated relu nonlinearity shang crelu relu relu denotes vector concatenation cres nonlinearity defined composition crelu cres relu relu cres nonlinearity introduces multiplicative interactions arguments via component symmetry via use crelu definition ems module remap module given crelu cres cres learnable parameters features fixed visual encoding network action vector sample match reward maps training episode thousands figure decision interfaces emerge naturally course training remap modules allow agent discover implicit interfaces task observe learning generally first captures emergence natural physical constructs learning decision rules examples include onscreen buttons appearing match screen mts task specific semantic meaning button learned arrows indicate random motion general discovery objects boundaries category rule applied image best viewed color ems structure builds three principles described stage represents early bottleneck visual encoding inputs bottlenecked size combined actions performs cres stages introducing multiplicative symmetric interactions visual features actions perfect module definition binary task becomes special case ems module note visual features bottlenecked encoder practice work fully connected convolutional features backbone experiments follow compare ems module wide variety alternative control motifs early bottleneck multiplicative symmetric features ablated multiplicative nonlinearity bottleneck ablations use spectrum standard activation functions including relu tanh sigmoid elu clevert crelu forms late bottleneck architectures effectively standard perceptrons mlps action vectors concatenated directly output visual encoder passed subsequent stages test distinct architectures detailed information found supplement xperiments compared architecture across variants visual mts localization tasks using fixed visual encoding features layer task variants ranged complexity simple binary task imagenet categories challenging imagenet mts task result buttons appearing varying positions trial complex tasks two variants localization either single main salient object placed complex background similar images used yamins dicarlo complex scenes see fig details tasks used experiments found supplement module weights initialized using normal distribution optimized using adam algorithm kingma parameters learning rates optimized basis fashion architecture task ran optimizations five different initialization seeds obtain mean standard error due initial condition variability modules measured performance modules three different sizes small medium large smallest version equivalent size ems module medium large versions much larger table emergence decision structures key feature remap modules able discover novo underlying output domain spaces variety qualitatively distinct tasks fig examples fig emergent decision structures highly interpretable reflect true interfaces environment implicitly defines spatiotemporal patterns learning robust across tasks replicable across initial seedings thus might serve candidate model interface use learning humans 
general observe modules typically discover reward reward training episodes thousands module ems symm mult none large none medium none small training episodes thousands figure ems modules components efficient visual learning system validation reward obtained course training modules reward map split four quadrants mts randomly moving match templates mts two randomly moving class templates shown time mts four randomly positioned images shown time lines indicate mean reward five different weight initializations clarity seven total tested architectures displayed see results remaining architectures supplement metric area learning curve normalized highest performing module within task modules averaged across tasks error values standard deviations mean underlying physical structures needed operate task interface learning specific decision rules needed solve task example case discrete mts categorization task fig involves quick discovery onscreen buttons corresponding discrete action choices buttons mapped semantic meaning case mts task fig observe initial discovery high salience object boundaries followed refinement important note visual backbone trained categorization task quite distinct localization task mts thus module learn different decision structure well class boundaries scratch training efficiency ems module efficiency learning measured computing taskaveraged normalized area learning curve modules tested across task variants fig shows characteristic learning curves several tasks summarized table fig results architectures tasks shown supplement figure find ems module efficient across tasks moreover ems architecture always achieves highest final reward level task increasing ablations ems structure lead increasingly poor performance terms learning efficiency final performance ablating polynomial interaction replacing crelu largest negative effect performance followed importance symmetric structure large models bottleneck using relu activations performed significantly worse smaller ems module single ablations better module neither symmetry multiplicative interactions small modules number parameters ems far least efficient oftentimes achieved much lower final reward summary main conceptual features architecture solves binary task individually helpful combine usefully parameterized efficiently learned variety visual tasks properties critical achieving effective task learning compared standard mlp structures second experiment focusing localization tasks tested ems module using convolutional features fixed feature encoder reasoning localization tasks could benefit finer spatial feature resolution find using visual features explicit spatial information convolutional ems ems symmetry multiplication none large none medium none small svm baseline reward iou training episodes thousands training episodes thousands figure convolutional bottlenecks allow fine resolution localization detection complex scenes mean intersection union iou obtained localization task reward obtained variant require visual systems accomodate finer spatial resolution understanding scene precise action placement mts tasks convolutional variant ems module uses skip connections layers input whereas standard ems uses layer input substantially improves task performance learning efficiency tasks fig knowledge results first demonstrated use reinforcement learning achieve object segmentations reward curves measuring bounding box iou fig show little difference late bottleneck modules size models consistently achieve iou variants especially 
convolutional features context baseline svr trained using supervised methods directly regress bounding boxes using vgg features results iou dynamic eural voting task witching far considered case touchstream consists one task however agents real environments often faced switch tasks many may encountering first time ideally agents would repurpose knowledge previously learned tasks relevant new task formally consider environment policies consisting sequences tasks may last indeterminate period time consider also set modules module corresponds policy new task begins cue agent allocate new module added set modules learning follows allocation weights old modules held fixed parameters new module trained however output system merely output new module instead dynamically allocated mixture pathways computation graphs old new modules mixture determined fig neural network learns dynamic distribution parts modules used building composite execution graph intuitively composite graph composed small number relevant pathways mix match parts existing modules solve new task potentially combination new module components need learned dynamic eural voting define assigns weights layer module let weight associatedp ith layer module weights probabilistic per layer basis interpreted probability controller selecting ith layer use execution graph distribution assignment weights composite execution graph defined generated computing sum activations components layer weighted probabilities values passed next layer process repeats mathematically composite layer stage expressed controller visual input composite layer fixed parameters figure dynamic neural voting controller dynamic neural voting solves new tasks computing composite execution graph previously learned newly allocated modules shown agent two existing modules yellow green one newly allocated module learned blue layer controller takes input activations three modules outputs set voting results probabilistic weights used scale activations within corresponding components voting done either basis basis clarity layer voting method depicted weighted sum three scaled outputs used input next stage computation graph new task solved combination existing module components weighted highly new module effectively unused assigned low weights however task quite different previously solved tasks new module play larger role execution graph learns solve task operator computes ith layer module original encoded input state question probabilistic weights come core procedure dynamic neural voting process controller network learns boltzmann distribution module activations maximize reward prediction accuracy process performed module layer module weightings given layer conditioned results voting previous layer softmax module weights layer concatenation learnable weight matrix controller voting procedure operates online fashion controller continously learning agent taking actions defined constitutes neural network learned gradient descent online mechanism involves voting across units specifically useful refinement assigns probabilistic weights neuron jth unit layer module contrast scheme dynamically generated execution graph computed meta controller becomes composite neurons activations concatenated form composite layer generalization equation voting scheme becomes softmax weights across modules empirically find initialization schemes learnable controller parameters important consideration design two specialized transformations also contribute slightly overall efficiency details please 
refer supplement dynamic neural voting mechanism achieves neural network optimized online via gradient descent modules solving tasks rather genetic algorithm operates longer timescale work fernando moreover contrast work rusu voting mechanism eliminates need adaptation layers modules thus substantially reducing number parameters required transfer reward witching xperiments scratch switch reuse fraction batch updates figure dyanmic neural voting quickly corrects switches although new module allocated task transition new task identitcal original task controller quickly learns reuse old module components top postswitching learning curve ems module binary task trained task clarity layer voting method compared baseline module trained scratch bottom fraction original module reused course learning calculated averaging voting weights layer original module switches first experiments tested dynamic neural voting mechanism would respond switches ones although switch cue given new module allocated environment policy task actually change fig find cases performance almost instantly approaches levels little penalty attempting uneccessary switch moreover find weightings controller applies new module low words system recognizes new module needed acts accordingly concentrating weights existing module results show formally assume agent cued task switches occurs theory could implement completely autonomous monitoring policy agent simply runs allocation procedure performance anomoly occurs sustained drop reward system determines new module unneeded could simply reallocate new module later task switch future work plan implement policy explicitly real switches next tested dynamic voting controller handled switches environment policy substantially changed switching cue using ems module control large module described dynamic neural voting controller evaluated switching experiments using multiple variants mts tasks specifically switches cover variety distinct mutually exclusive switching types including addition new classes dataset switch indexes table fig replacing current class set entirely new class set switch ids addition visual variability previously less variable task switch addition visual interface elements new buttons switch transformation interface elements screen rotation switch ids transitions different task paradigms mts tasks switch ids controller hyperparameters optimized fashion see appendix optimizations three different initialization seeds run obtain mean standard error figures show characteristic learning curves ems module layer voting voting methods additional switching curves found supplement cumulative reward gains relative learning scratch quantified relative switch gain auc rgain module trained scratch reward ems voting layer voting ems layer voting voting relative auc gain layer voting voting none large transfer gain base task stationary mts stationary mts stationary mts mts mts switch task new classes stationary mts new classes stationary mts mts mts permuted mts base task stationary mts switch task stationary mts quadrant quadrant quadrant class reversal squeezed map map rotation task switching dynamic neural voting learning curves ems module quadrant task learning task mts task match screen class templates layer voting method voting method compared baseline module trained second task scratch across twelve task switches evaluate relative gain auc baseline rgain using voting methods ems module late bottleneck mlp transfer gain tgain metrics compared module types voting mechanisms colors 
ems module module figure second task switch module transferred initial task using dynamic voting controller find dynamic voting controller allows rapid positive transfer module types across task switches general voting method somewhat better transfer mechanism layer voting method fig ems module large module shown inefficient performance benefit dynamic neural voting fig ems modules switchable quantify fast switching gains realized use transfer gain gain argmax time maximum max amount reward difference switch occurs reward difference time qualitatively high score transfer gain metric indicates large amount relative reward improvement achieved short amount time see figure graphical illustration relationship rgain gain metrics ems large modules positive transfer gain ems scores significantly higher metric significantly switchable large module fig hypothesize due ems module able achieve high task performance significantly fewer units larger module making former easier dynamic neural voting controller operate onclusion uture irections work introduce touchstream environment continual reinforcement learning framework unifies wide variety spatial tasks within single context describe general algorithm remap learning neural modules discover implicit task interfaces within environment show particular module architecture ems able remain compact retaining high task performance thus especially suitable flexible task learning switching also describe simple general dynamic architecture shows substantial ability transfer knowledge modules new tasks learned crucial future direction expand insights current work complete agent need show approach scales handle dozens hundreds task switches sequence also need address issues agent determines build new module consolidate modules appropriate series tasks previously understood separate solved single smaller structure also critical extend approach handle visual tasks longer horizons navigation game play extended strategic planning likely require use recurrent memory stores part feature encoder application point view particularly interested using techniques like described produce agents autonomously discover operate interfaces present many important problem domains smartphones internet grossman also expect many techniques enable modules perform well touchstream environment also transfer naturally context autonomous robotics applications devin compelling eferences jacob andreas marcus rohrbach trevor darrell dan klein deep compositional question answering neural module networks corr url http michael arbib schema theory encyclopedia artificial intelligence matthew botvinick jonathan cohen computational neural basis cognitive control charted territory new frontiers cognitive science tianqi chen ian goodfellow jonathon shlens accelerating learning via knowledge transfer arxiv preprint clevert thomas unterthiner sepp hochreiter fast accurate deep network learning exponential linear units elus corr url http michael cole jeremy reynolds jonathan power grega repovs alan anticevic todd braver connectivity reveals flexible hubs adaptive task control nat sep deng dong socher imagenet hierarchical image database ieee cvpr coline devin abhishek gupta trevor darrell pieter abbeel sergey levine learning modular neural network policies transfer corr url http yan duan john schulman chen peter bartlett ilya sutskever pieter abbeel fast reinforcement learning via slow reinforcement learning corr url http gabriel richard evans peter sunehag ben coppin reinforcement learning large 
discrete action spaces corr url http chrisantha fernando dylan banarse charles blundell yori zwols david andrei rusu alexander pritzel daan wierstra pathnet evolution channels gradient descent super neural networks corr url http david gaffan susan harrison disconnection fornix transection visuomotor conditional learning monkeys behavioural brain research issn doi https url http lev grossman invention year iphone time magazine online alexa horner christopher heath martha brianne kent chi hun kim simon nilsson johan charlotte oomen andrew holmes lisa saksida touchscreen operant platform testing learning memory rats mice nature protocols andreas rohrbach darrell saenko learning reason module networks visual question answering arxiv april diederik kingma jimmy adam method stochastic optimization url http corr james kirkpatrick razvan pascanu neil rabinowitz joel veness guillaume desjardins andrei rusu kieran milan john quan tiago ramalho agnieszka demis hassabis claudia clopath dharshan kumaran raia hadsell overcoming catastrophic forgetting neural networks corr url http krizhevsky sutskever hinton imagenet classification deep convolutional neural networks advances neural information processing systems lin michael maire serge belongie lubomir bourdev ross girshick james hays pietro perona deva ramanan piotr lawrence zitnick microsoft coco common objects context corr url http james mcclelland incorporating rapid neocortical learning new information complementary learning systems theory journal experimental psychology general elisabeth murray mortimer mishkin object recognition location memory monkeys excitotoxic lesions amygdala hippocampus journal neuroscience issn url http georg ostrovski marc bellemare van den oord munos exploration neural density models corr url http rajalingham schmidt dicarlo comparison object recognition behavior human monkey ali razavian hossein azizpour josephine sullivan stefan carlsson cnn features astounding baseline recognition computer vision pattern recognition workshops cvprw ieee conference ieee andrei rusu neil rabinowitz guillaume desjardins hubert soyer james kirkpatrick koray kavukcuoglu razvan pascanu raia hadsell progressive neural networks corr url http wenling shang kihyuk sohn diogo almeida honglak lee understanding improving convolutional neural networks via concatenated rectified linear units corr url http karen simonyan andrew zisserman deep convolutional networks image recognition arxiv preprint richard sutton andrew barto introduction reinforcement learning mit press cambridge usa edition isbn anthony wagner daniel schacter michael rotte wilma koutstaal anat maril anders dale bruce rosen randy buckner building memories remembering forgetting verbal experiences predicted brain activity science jane wang zeb dhruva tirumala hubert soyer joel leibo munos charles blundell dharshan kumaran matt botvinick learning reinforcement learn corr url http yamins hong cadieu solomon seibert dicarlo hierarchical models predict neural responses higher visual cortex proceedings national academy sciences daniel yamins james dicarlo using deep learning models understand sensory cortex nature neuroscience friedemann zenke ben poole surya ganguli improved multitask learning synaptic intelligence arxiv preprint upplementary aterial task variants ems module ablation controls evaluated suite localization mts variants standard binary task double binary four class variant class assigned either right left half action space quadrant four class variant class assigned quadrant 
action space stationary mts standard binary mts task stereotyped match screens stationary mts two class variant mts task match templates horizontal placement randomly chosen confined within vertical plane stationary mts two class variant mts task match templates vertical position randomly chosen class confined specific side stationary mts two class variant mts task match templates positions completely random mts four class variant mts task two class templates shown match screen appearing random horizontal location well mts random vertical motion templates stationary mts four class variant mts task four class templates shown match screen fixed positions permuted mts randomly permuted locations match templates localization localization task mts mts task using detection challenge dataset match screens randomly samples scenes dataset xperiment details datasets experiment details image categories used drawn ilsvr classification challenge dataset deng four unique object classes taken dataset boston terrier monarch butterfly race car panda bear class unique training instances unique validation instances experiment details sample screen images drawn class set tasks one unobstructed class instance also drawn classification challenge set used match screen template image class class template images match screen held fixed pixels variants mts task keep six pixel buffer edges screen match images twelve pixel buffer adjascent edges match images variants without vertical motion match images vertically centered screen localization experiment details localization task uses synthetic images containing single main salient object placed complex background similar images used yamins dicarlo yamins total unique classes dataset contrast localization datasets designed one large object instance trivial policy always poke image corners could learned synthetic image set offers larger variance instance scale position rotation agent forced learning policies requiring larger precision action selection mts experiment details task uses entire detection challenge dataset lin every timestep sample screen chosen one classes constructed large unobstructed face centered representations class match screen sample random scene containing number objects containing least single instance sample class agent rewarded action located inside instance correct class modules use sample actions boltzmann policy empirically found result precise reward map prediction odules nits ayer table aggregates number units per layer ems ablated modules used conducting experiments modules layer sizes shown details convolutional bottleneck ems module please refer table number units per layer investigated modules ems mts loc symm mult small med large normalized validation auc double binary quadrant stationary mts vert motion mts horiz flip mts mts mts mts stationary mts permuted mts rtia sym sym atia non mult rel rge non elu larg non non rge rel ediu mult mult non ediu non elu dium non rel mall non larg mult non sig larg mult non elu non mall non sig dium non med ium non sma non sig localization figure exhaustive module performance study ems module ablation control modules measured area curve mts loc task variants shown auc normalized highest performing module task results fig averaged vertical task axis report salient subset ablations onvolutional odule convolutional bottleneck extension ems module shown paper skip connections link representation visual backbone scenelevel representation stored remap memory buffer tiled spatially match present 
convolution dimensions concatenated onto channel dimension series convolutions plays role shallow visual bottleneck activations vectorized concatenated input cres layers standard ems module results paper shown bottleneck consisting single tanh two cres convolutions units downstream layers use units well motivation convolutional bottleneck features useful complex spatial tasks localization object detection hence may result precise policy tiling entire representation along convolution layer channel dimension form multiplicative possible objects must memorized mts templates inside present scene xhaustive blation tudy investigated distinct ablations ems module across twelve task variants outlined sec fig symmetry ablations replace cres activation relu multiplicative ablations denoted specifying nonlinearity used place cres one relu tanh sigmoid elu clevert crelu shang additionally includes one partial symmetry ablation denoted partial symm visual bottleneck symmetric one ablates relu symm module denoted table module learning rates ems partial symm symm mult relu tanh sig elu crelu none relu small none relu medium none relu large none tanh small none tanh medium none tanh large none sig small none sig medium none sig large none elu small none elu medium none elu large none crelu small none crelu medium none crelu large double binary stationary stationary mts vertmotion mts horiz flip mts mts mts vertmotion mts stationary mts permuted mts loc yperparameters learning rates adam optimizer chosen basis grid architecture values used present study may seen table dditional eward aps five additional reward map examples mts task provided figure examples plotted course learning dditional earning urves ingle blation xperiments learning trajectories seven additional tasks provided figure modules capable convergence task run acheived auc values given task calculated point time majority models converge dynamic voting ontroller ems module task witching xperiments additional trajectories ten unshown switching curves provided figure dynamic voting ontroller augmentations earnable parameter nitializations describe weight initialization scheme found optimal use dynamic voting controller simplicity consider mechanism learnable sample match reward maps training episode thousands figure examples emergence decision interfaces mscoco mts reward map predictions course training different object classes reward double binary stationary mts mts mts mts stationary mts ems symm mult none large none medium none small training episodes figure additional performance ablation learning curves seven learning curves shown task variants seen main text body shown ablations main text double binary stationary mts stationary mts new classes stationary mts stationary mts stationary mts mts mts mts mts permuted mts double binary stationary mts double binary quadrant reward new classes ems voting layer voting training episodes weight matricies biases intended biasing scheme achieved initializing elements parameters initialization technique also generalized use voting mechanism switching experiments presented section sweep hyperparameters narrow band around default scheme ranges targeted ransformations two additional switching mechanisms added controller augment ability switch taks remappings action space reward policy preexisting module action ransformations note efficient modules effectively produce minimal representation interaction action space observation agent optimal action space shifts remainder task context remains fixed controller 
Reward Transformations. The reward maps reflect the agent's uncertainty about the environment's reward policy. If the task context remains stationary but the environment transitions to a new reward schedule that no longer aligns with the module's policy, the controller could transition more gracefully by containing a mechanism allowing a targeted transformation here as well, and hence we also provide one. One complication arises in this remapping: since a module learns its optimal action space internally, on the basis of its own feature representation rather than the raw actions, transformations of the reward-map distribution must also respect this internal mapping. In this work we investigate a shallow adapter neural network that lives on top of the existing module and maps its first and second layers through a transformation similar to the action transformation, combining an elementwise multiplication with a learnable matrix embedding the hidden state and a learnable matrix embedding the maps. A pseudo-identity transformation, similar to that of the action space, is obtained by modifying this transformation as well: the original maps are concatenated at the beginning of the transformation's input vector, and the intended identity-preserving behaviour is accomplished via a targeted initialization of the remaining weights.

Targeted Transformation Hyperparameters

The targeted transformations have several hyperparameters, for which we conducted a grid search, optimized on a set of test task switches designed to be solved by one of the targeted transformations (see the corresponding figure). This was conducted independently of the dynamic voting controller and independently for each transformation; the optimal hyperparameters found in these experiments were then fixed for use in the integrated dynamic voting controller, which was optimized afterwards.

For the action transformation hyperparameters, we conducted three tests using the stimulus-response paradigm: a class reversal (the left class becomes the right class), a horizontal rotation of the reward boundaries, and a switch back to the original task, intended to test the identity component. Here we find a single linear transformation optimal for the new embedding, with the transformation weights initialized as described above and the learning rate of the transformation set to the value found optimal in the grid.

For the reward map transformation hyperparameters, we conducted two tests using the stimulus-response paradigm: a "squeezing" task in which reward is no longer dispensed on the lower half of the screen, and a switch back to the original task, again intended to test the identity component. Here we find CReS activations optimal over ReLU units for the hidden layer, together with the weight initialization scheme found optimal above, its initial bias, and the learning rate of the transformation found optimal in the grid.

Transform Ablation Study

A study was conducted to determine the relative benefit of the targeted transformations (see the ablation figure below). It determined that the primary contribution to the dynamic neural controller is in fact its voting mechanism, although the transformations supplement it well.

Deployment Scheme

When task modules are cued at a task transition, the controller freezes the learnable parameters of the old module and deploys a new uninitialized one; it also initializes the action and reward map transformation networks described above on top of the old module. These transformations are also voted on inside the dynamic neural controller at every timestep.

Switching Metrics

The corresponding figure graphically illustrates the metrics used in this paper to quantify switching performance. There are two metrics: the relative AUC gain, the ratio of the areas under the switched and from-scratch learning curves (the green and purple shaded regions of the figure), and the transfer gain, the difference in reward shortly after the switch relative to the maximum reached. Relative AUC measures the overall gain relative to learning from scratch, while transfer gain measures the speed of transfer.

[Figure: illustration of the switching performance metrics on an example transfer curve. Axis and annotation residue (reward, tgain, max rgain, AUC, training episodes) omitted.]
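As a concrete illustration (entirely our own, with synthetic curves; the exact normalization of the transfer gain is not recoverable here, so a plausible stand-in is used), the two metrics could be computed from logged learning curves as follows:

import numpy as np

def relative_auc_gain(switched, scratch):
    """Ratio of areas under the switched vs. from-scratch reward curves.

    Values above 1 mean the switched module accumulates more reward over
    the same training window than a module trained from scratch.
    """
    return switched.sum() / scratch.sum()

def transfer_gain(switched, scratch):
    """Early-reward difference, normalized by the best reward reached.

    Captures how quickly performance transfers right after the switch;
    the maximum over both curves serves as an assumed normalizer.
    """
    early = slice(0, max(1, len(switched) // 10))  # first 10% of episodes
    return (switched[early].mean() - scratch[early].mean()) / max(
        switched.max(), scratch.max())

# Synthetic curves: a fast-transferring switch vs. learning from scratch.
episodes = np.linspace(0.0, 1.0, 200)
scratch = 1.0 - np.exp(-3.0 * episodes)          # slow ramp-up from zero
switched = 1.0 - 0.3 * np.exp(-6.0 * episodes)   # starts high, converges fast
print(relative_auc_gain(switched, scratch))
print(transfer_gain(switched, scratch))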
[Figure: action and reward map transformation switch examples. The three task-switching experiments performed to optimize the hyperparameters of the targeted transformations that augment the dynamic neural voting controller: an environment reward-policy reversal, an environment reward-policy rotation, and an environment reward-policy squeeze, each on an image task, showing the switch reward map over training trials.]

These switches were also retested and are shown alongside the original switching results in the corresponding figure: binary class reversals (the left class becomes the right class), rotations of the binary reward boundaries, and squeezing of the binary reward boundaries (in the new task, reward is given on the bottom half of the screen regardless of class), reported as relative AUC gain.

[Figure: targeted controller transform ablation. Relative AUC gain of the EMS module on the switching scenarios of this paper, with the targeted transformations ablated.]

A transfer curve is also shown for the EMS module with the voting method, evaluated on a switch from an MTS task with two randomly moving class templates to an MTS task with four randomly moving templates.
2
Smooth backfitting of proportional hazards: a new approach projecting survival data

Munir Hiabu (Cass Business School, City, University of London), Enno Mammen (Heidelberg University, Germany), María Dolores Martínez-Miranda (Cass Business School, City, University of London), Jens Perch Nielsen (Cass Business School, City, University of London)

Summary. Smooth backfitting has proven to have a number of theoretical and practical advantages in structured regression: smooth backfitting projects the data onto the structured space of interest, providing a direct link between data and estimator. This paper introduces the ideas of smooth backfitting to survival analysis in a proportional hazard model, where we assume an underlying conditional hazard with multiplicative components. We develop asymptotic theory for the estimator and use the smooth backfitter in a practical application, where we extend recent advances in forecasting methodology by allowing more information to be incorporated while still obeying the structured requirements of forecasting.

Keywords: Aalen multiplicative model; local linear kernel estimation; survival data; forecasting.

1. Introduction

Purely unconstrained nonparametric models suffer from the curse of dimensionality in high-dimensional data spaces. Often, structure is introduced to stabilize the system and to allow one to visualize, interpret, extrapolate or forecast properties of the underlying data. The smooth backfitting algorithm of Mammen et al. considered the simplest nonparametric structure in a regression context, the additive structure, and is a successful update of kernel-smoothing regression backfitting algorithms, with many theoretical and practical advantages over earlier approaches to regression backfitting that are still popular to this day. The regression backfitting algorithms of Hastie and Tibshirani are numerical procedures estimating one component given estimates of the rest. In contrast, the smooth backfitter is a direct projection of the data onto the structured space of interest; this direct relationship between data and estimates gives a solid grip on the estimated theoretical properties of the underlying components, see also Nielsen and Sperlich.

The purpose of this paper is to introduce smooth backfitting to the field of survival analysis, for nonparametric smooth hazard estimation. While the additive structure is natural and widely used in regression, the multiplicative structure seems more natural in hazard estimation: it is omnipresent in the Cox regression model, the proportional hazard model, and in the many extensions and alternatives to the Cox hazard model formulated in a multiplicative framework. We have therefore chosen the multiplicative hazard structure as the natural place to start when introducing smooth backfitting to survival analysis. Smooth multiplicative backfitting is theoretically more challenging than additive smooth backfitting. Smooth backfitting of a multiplicative regression structure was analysed in detail as a special case of generalised additive models, where it was proved that the multiplicative structure, in contrast to simpler additive regression models, provides asymptotic theory in which the number of interactions of exposure is available in different directions. Naturally, the asymptotics provided here for smooth backfitting of multiplicative hazards contain similar interactive components in the asymptotic theory. We are able to provide a simple algorithm by first projecting the data onto an unconstrained estimator, and then projecting the unconstrained estimator onto the multiplicative space of interest. The numerical algorithms are greatly simplified by a new principle of weighting the projection according to the final estimates.

Let the covariate process be observed as long as the observed object is under exposure. We are interested in the conditional hazard of a nonnegative random variable, defined as a limit as stated in the display below, where the hazard is assumed to be an unknown smooth function depending on time and on the value of the covariate at that time point. In many cases the survival time might be subject to filtering; filtered observations are present in a vast variety of topics, including right censoring in experimental studies like clinical trials, and left truncation in insurance loss data. A first version of this model was introduced by Beran, where the author considered time-independent covariates and a filtering scheme with right censoring; Dabrowska showed weak convergence of the estimator presented there.
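Restating the model in display form (a reconstruction in standard notation for this literature; the precise symbols are our choice, made to match the surrounding prose):

\[
  \alpha\bigl(t \mid X_i\bigr)
  \;=\; \lim_{h \downarrow 0} \frac{1}{h}\,
  \Pr\!\bigl(T_i \in [t, t+h) \,\bigm|\, T_i \ge t,\; X_i(t) = x\bigr),
\]

together with the structured, multiplicative (proportional-hazards type) assumption studied in the paper, for a d-dimensional covariate,

\[
  \alpha(t, x) \;=\; \alpha_{d+1}(t)\, \prod_{j=1}^{d} \alpha_j(x_j),
\]

with the components identified up to multiplicative constants by a norming constraint.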
general model time dependent covariates also general filtering patterns analysed mckeague utikal nielsen linton part counting process model estimator nielsen linton identified natural local constant estimator context also generalized local linear estimator nielsen observe independent identically distributed copies process predictable process counting process intensity smooth backfitting proportional hazards multiplicative intensity assumption counting process known aalen multiplicative intensity model andersen give comprehensive overview embed various survival data models counting process formulation section show embed left truncation right censoring approaches like attractive minimal assumptions underlying model compared example fully parametric approach however estimation accuracy decreases rapidly number dimensions also known curse dimensionality weakness overcome introducing assumptions separable structures underlying hazard see also stone paper assume conditional hazard multiplicative algorithms kernel smoothing provided hastie tibshirani fan recently lin kernel smoothing within model framework first analysed filtered survival context linton based approach principle marginal integration linton nielsen marginal integration however requires rectangular support data data example paper taken insurance challenge estimating outstanding liabilities data triangular support approach linton therefore feasible show section smooth projection approach directly links data underlying multiplicative structure also works triangle support data example requires section underlying survival model set section first pointed unconstrained multidimensional hazard estimators considered ratios smooth occurrence smooth exposure unlike regression local constant estimator exhibits simple structure secondly smooth backfitting estimator defined projection unconstrained hazard estimator enjoys simple ratio structure section asymptotic properties given smooth backfitting estimator defined section general sufficient conditions given asymptotic properties unconstrained smooth occurrence unconstrained smooth exposure smooth backfitter based section consider sophisticated version forecasting enabled new smooth backfitter introduce smooth extension popular actuarial chain ladder model forecasting possible imposed multiplicative structure concluding section point multiplicative hazard estimation natural place start hazard structures might interesting consider future martinussen scheike example consider rich class additive combined additive multiplicative structures could interesting explore future work proofs deferred appendix aalen multiplicative intensity model consider aalen multiplicative intensity model allows general observations schemes covers filtered observations arising left truncation right censoring also complicated patterns occurrence exposure next section describe hiabu embed left truncation right censoring framework contrast linton hereby allow filtering correlated survival time represented covariate process briefly summarize general model assuming observe iid copies stochastic processes denotes counting process zero time zero jumps size one process takes values value indicates individual risk finally covariate process values rectangle multivariate process adapted filtration satisfies usual conditions assume satisfies aalen multiplicative intensity model lim deterministic function called hazard function failure rate individual time given covariate left truncation right censoring time covariates prominent 
example aalen multiplicative intensity model filtered observation due left truncation right censoring show embed model covariate possibly carrying truncation censoring information aalen multiplicative intensity model every covariate coordinate carry individual truncation information long corresponds left truncation observe set compact holds set allowed random independent given covariate process furthermore subject right censoring censoring time assume also conditional independent given covariate process includes case censoring time equals one covariate coordinate conclusion observe iid copies min truncated version arises conditioning event subject define counting process tei respect filtration class completes filtration straightforward computations one conclude setting including aalen multiplicative intensity model satisfied lim tei smooth backfitting proportional hazards smooth backfitting estimator multiplicative hazards describe smooth backfitting problem two steps first data projected onto unconstrained space resulting unconstrained estimator secondly unconstrained estimator projected onto multiplicative space interest first show local constant local linear projection first step lead estimators simple ratios smoothed occurrence smoothed exposure resembles simple structure known local constant estimator regression however local linear regression satisfy simple structure background approach able derive general underlying conditions smooth backfitter work encompass local constant local linear estimators fact encompasses estimators simple ratio structure including local polynomial kernel hazard estimators situations unconstrained estimator expressed simple ratio first step projecting data onto unstructured space resulting unconstrained estimator section concentrate local constant local linear estimators defined innielsen projection data onto unconstrained space notice two estimators expressed ratio smoothed occurrence smoothed exposure important next section unconstrained estimator projected multiplicative space interest introduce notation also set coordinates write hazard estimate components structured hazard need unstructured pilot estimator hazard first propose local linear kernel estimator based least squares nielsen value defined solution equation dni arg min lim dni following restrict multiplicative kernel bandwidth simplicity notation bandwidth depend general choices would possible cost extra notation local linear estimator includes boundary corrections bias order boundary interior support namely general case varying bandwidths consider local constant estimator achieves slower rates boundary region local polynomial estimators higher order like regression usual hiabu drawback known higher order kernels perform poorly long sample sizes large solution least square minimisation rewritten ratio smooth estimators number occurrence exposure see details dni components vector xij entries djk matrix given djk xij xik compare local linear estimator defined bll estimator local constant version defined similar ratio blc smoothed occurrence smoothed exposure given dni standard smoothing conditions chosen order bias blc order variance order optimal rate convergence corresponding regression problem see stone asymptotic theory estimators see linton bll structured smooth backfitting estimator via minimization section project unconstrained estimator previous section onto multiplicative space interest due filtering observations assumed available subset full support estimators restricted set assumptions 
data generating functions smooth backfitting proportional hazards given next section calculations simplify via new principle call solutionweighted minimization assume solution use strategically least squares weighting procedure directly feasible compute made feasible defining iterative procedure sequel assume multiplicative structure hazard functions constant identifiability components make following assumption dxj weight function also need following notation denoting density corresponding respect lebesgue measure also define define estimators hazard components solution following system equations xxk xxk dxk xxk denotes set furthermore timators discuss system solution probability tending one next section show asymptotic properties estimator see require estimators consistent need asymptotic consistency marginal averages estimators see already highlights estimator efficiently circumvents curse dimensionality practice system solved following iterative procedure xxk xxk hiabu finite number cycles termination criterion applies last values multiplied factor constraint fulfilled choice always achieved multiplication constants gives backfitting approximations estimator motivated weighted least squares estimator random weights see consider estimator minimizes min weighting unconstrained estimator gives described via backfitting equation xxk asymptotic variance kernel estimators proportional see linton nielsen motivates choice however choice possible unknown one could use pilot estimators follow another idea propose weight minimization solution choose heuristically putting plugging get next section discusses existence asymptotic properties solution asymptotic properties smooth backfitter multiplicative hazards estimator defined solution nonlinear operator equation going approximate equation linear equation interpreted equation arises nonparametric additive regression models show solution linear equation approximates linear equation solution well understood theory additive models essential step arrive asymptotic understanding estimator assumptions standard nature marker dependent hazard papers verified local constant smooth backfitting proportional hazards local linear estimators interested see particular nielsen linton nielsen linton related calculations however one notice conditions restricted local constant local linear smoothers even tight kernel smoothers smoother could used long obeys structure ratio smoothed occurrence smoothed exposure main theorem make following assumptions hereby make assumptions full support subset make use following notations xxj defined equation xxj denotes set furthermore define values values values function two times continuously differentiable inf hazard two times continuously differentiable inf kernel compact support without loss generality supposed furthermore symmetric continuous holds constant holds dxj dxk holds marginal occurences bounded bounded away holds dxj dxk sup dxj sup dxj note assumptions standard kernel smoothing theory assumptions assume marginal occurrences bounded bounded away make assumption marginal occurrences property allows support marginal density ojk triangular shape constant easily seen suppose simplicity ojk hiabu uniform density triangle dxj dxk thus assumption marginals fulfilled one easily verify also holds example discussion extended shapes marginals differ rectangular supports note also trivially hold marginal bounded away zero solutions rewritten solutions xxk xxk since difference two zerok terms zero well xxk xxk xxk xxk note defined 
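As an illustration of the iterative procedure described above, here is a minimal sketch on a two-dimensional grid. It is entirely our own: all names are hypothetical, and the weights are the plain exposure-weighted least-squares choice discussed above rather than the paper's solution-weighted refinement.

import numpy as np

def multiplicative_backfit(alpha_pilot, exposure, n_cycles=50, tol=1e-8):
    """Project an unconstrained 2-D hazard estimate onto a product form.

    alpha_pilot : (n1, n2) unconstrained pilot estimate on a grid
    exposure    : (n1, n2) smoothed exposure used as weights
    Returns components (a1, a2), renormalized so that a2 averages to one
    (the identifiability constraint up to multiplicative constants).
    """
    n1, n2 = alpha_pilot.shape
    a1, a2 = np.ones(n1), np.ones(n2)
    for _ in range(n_cycles):
        a1_old = a1.copy()
        # Update a1 holding a2 fixed: weighted least squares per row.
        num = (alpha_pilot * exposure * a2[None, :]).sum(axis=1)
        den = (exposure * (a2 ** 2)[None, :]).sum(axis=1)
        a1 = num / np.maximum(den, 1e-12)
        # Update a2 holding a1 fixed: weighted least squares per column.
        num = (alpha_pilot * exposure * a1[:, None]).sum(axis=0)
        den = (exposure * (a1 ** 2)[:, None]).sum(axis=0)
        a2 = num / np.maximum(den, 1e-12)
        # Renormalize by a constant; the product a1*a2 is unchanged.
        c = a2.mean()
        a2, a1 = a2 / c, a1 * c
        if np.max(np.abs(a1 - a1_old)) < tol:
            break
    return a1, a2

# A product-form pilot is recovered exactly by the projection:
g1, g2 = np.linspace(1.0, 2.0, 20), np.linspace(0.5, 1.5, 30)
pilot = np.outer(g1, g2)
a1, a2 = multiplicative_backfit(pilot, exposure=np.ones((20, 30)))
print(np.allclose(np.outer(a1, a2), pilot))  # True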
root operator motivated define approximation solution linear equation xxk constraint dxk smooth backfitting proportional hazards equal constraint choice constraint one note norming constraint used practice unknown simplify theoretical discussion results carried feasible weighting equation rewritten integral equation second kind dxj note functions depend integral equation also simply written integral operator kernel come point state proposition show approximates gives asymptotics decompose next results need conditions estimators three terms depend deterministic functions terms defined hiabu note deterministic functions typical choices expectations stochastic part smoother bias terms well understood easily treated standard smoothing theory want develop asymptotic theory estimators asymptotic properties described properties dxk use following normings quantities dxk assume following assumptions hold holds log uniformly sup sup holds function dxj dxj holds sup dxk log smooth backfitting proportional hazards holds sup sup holds shortly discuss assumptions condition mild consistency assumption smoother also weak condition setting smoothers typically standard limit result many smoothers assumes rates stochastic part bias part nonparametric smoother standard smoothness assumptions interpretation note integral left hand side formula global average local average global weighted average mean zero random varibales thus one expects rate integral supremum integrals one expects log rate faster required rate states bound total number occurrences easily verified assumption one dimensional marginals bounded furthermore one easily check marginals bounded twodimensional marginal properties discussed example assumption following proposition states stochastic expansion proposition make assumptions function introduced exists uniquely defined probability tending one moreover following expansion function define sup max furthermore function defined dxj dxj dxj proposition get corollary asymptotic distribution proposition make assumptions holds bias distribution additional assumption order hiabu following theorem states indeed good approximation relative estimation error theorem assumptions holds probability tending one exists solution solves equation solution get distribution forecasting outstanding loss liabilities chain ladder method popular approach estimate outstanding liabilities started deterministic algorithm used today almost every single insurance policy world business insurance many developed countries insurance industry revenues amounting around therefore comparable smaller banking industry every single product sold chain ladder method actuaries hardly use methods comes estimating outstanding liabilities eventually aggregate reserve single biggest number insurers balance sheets insurers liabilities often amount many times underlying value company europe alone outstanding liabilities estimated accumulate around therefore obvious importance estimate far best possible estimate describe section methodology introduced paper applied provide solution challenging problem analyze reported claims motor business line cyprus data set used hiabu consists number claims reported years days claims reported data given denotes underwriting date claim time underwriting date date report claim days also called reporting delay hence notation previous sections covariate underwriting date depend time dimension data exist triangle december subset full support january aim forecast number future claims contracts written past 
reported yet figure shows observed data lie triangle forecasts required triangle added first completes square implicitly assumed maximum reporting delay claim years actuaries call assumption triangle fully run data set reasonable assumption looking figure smooth backfitting proportional hazards fig histogram claim numbers motor business line axis represents underwriting time months axis reporting delay months hiabu classical chain ladder method able provide simple solution problem recently pointed method viewed multiplicative density method original random variable density authors suggested embed method standard mathematical statistical vocabulary engage mathematical statisticians future developments particular showed one could consider traditional chain ladder estimator multiplicative histogram continuous framework presented alternative projecting unconstrained local linear density onto multiplicative subspace approach called continuous chain ladder analyzed mammen lee providing full asymptotic theory underlying density components related approach hiabu see also hiabu proposes transform multiplicative continuous chain ladder problem two continuous hazard estimation problems via elegant trick application considered paper generalizes important reversed hazards multiplicatively structured hazard way continuous chain ladder improved generalized allowing flexibility estimation outstanding liabilities insurance business hiabu assumed independent means underwriting date claim effect reporting delay going impose strong restriction order discuss independence assumption consider figure points plots derived first transforming data triangle dimension dimension see aggregating data quarterly triangle also hiabu one derives quarterly hazard rate ratio values scaled occurrence exposure norming factor letting start around function fixed final values displayed figure show plots since almost claims reported five quarters independence assumption hiabu satisfied points lie around horizontal line plot multiplicative hazard assumption paper satisfied smooth shape allowed four graphs must equal correction noise model defined graphs first component fixed mimic quarterly version inspecting four plots one argue see negative drift similar magnitude graph values decaying around indicates approach paper give better fit data compared model hiabu discussion continue embedding observations proportional hazard framework afterwards show hazard estimate used forecast number outstanding claims first note apply approach paper directly since application observe smooth backfitting proportional hazards fig scaled quarterly hazard rates first four development quarters right truncation analogue hiabu transform random variable result right truncation truncation becomes left truncation thus consider random variable variable interest notation considered section conclude counting process tir satisfies aalen multiplicative intensity model respect filtration given section lim tir therefore estimate unstructured hazard using local linear estimator estimator described section note components multiplicative conditional hazard computed estimators require choice bandwidth parameter assumed scalar order simplify notation paper application generalize restriction allowing different smoothing levels dimension namely reporting delay underwriting time bandwidth parameter vector estimate using see details appendix alleviate computational burden aggregated data triangle considering bins two days applying discrete version estimators described 
appendix several trials run minimization hiabu differences hazard estimates underwriting development fig difference structured unstructured hazard estimator grid bandwidth components days results estimation procedure given figure figure first figure shows estimated components multiplicatively structured hazard estimator latter shows difference structured unstructured estimators finally total number outstanding claims reserve estimated fzi reserve exp fbzi note fbz estimator conditional density survival time reserve also decomposed provide cash flow next periods future divided periods length amount claims smooth backfitting proportional hazards underwriting year delay years fig estimated multiplicative hazard components table number outstanding claims future quarters backfitting approach paper compared chain ladder method clm approach hiabu future quarter tot hiabu clm forthcoming ath period estimated fzi reservep table shows estimated number outstanding claims future quarters compare approach paper results derived hiabu traditional chain ladder method two latter approaches common assume independence underwriting date reporting delay see approaches estimate similar total number outstanding claims reserve two approaches distributions quarters different results obtained method proposed paper seems violation independence assumption big influence reserve since balances different development patterns arising different periods however problem becomes quite serious one interested detailed estimates like cash flow conclusion paper provided first introduction smooth backfitting survival analysis hazard estimation starting point popular proportional hazard model fully nonparametric components one could imagine smooth backfitting could play role long list structured problems semiparametric nonparametric survival analysis one could example imagine smooth backfitting provide useful extensions practical dynamic survival models martinussen scheike one also think applications many known extensions cox regression model understanding link data estimators improved via direct projection approach smooth backfitting hiabu bandwidth selection crucial problem practice finding right amount smoothing using nonparametric approaches application described paper considered maybe straightforward way estimate optimal bandwidth method method density estimation goes back rudemo bowman nowadays slightly modified version see hall used aims minimize integrated squared error framework bandwidth proposed nielsen linton arises idea minimize integrated squared error expanding square two three terms depend therbandwidth thus considered feasible estimate done unbiased estimator dni version arises definition structured estimator setting finally define bandwidth bcv bcv arg min dni theoretical properties hazard estimation one dimensional case derived mammen knowledge theoretical analysis multivariate hazard case paper extensive simulation study multivariate case found proofs proof proposition proof proposition follows lines proof theorem mammen needs major modifications last steps proof weaker assumptions ones assumed latter theorem outline first part proof mammen also goes weaker assumptions show additional arguments used therlast part note assumptions get ojk dxj dxk lemma mammen implies constants max max smooth backfitting proportional hazards denotes norm furthermore one gets sup operator dxk furthermore note holds dxj dxk dxj dxk equations follow see note imply log uniformly holds gives log uniformly together implies three 
equations lemma mammen conclude equations lej probability tending one define replaced bjk dxj particular put dxj led sup lej arguing first part lemma mammen gives functions lek defined tbl probability tending one constant put brd tbl hiabu point followed closely arguments proof theorem mammen arguments parts proof latter theorem would need notation sup dxk bounded constant probability tending one would imply probability tending one constant functions sup dxk ckgk seen application inequality proof theorem mammen shows used show probability tending one constant unfortunately setting hold thus follow holds setting indeed one check general hold assumptions consider discussed statement assumption thus map function bounded function bounded also hold replace weighted norm argue twice application function bounded transformed function bounded follows following two estimates functions constant dxk dxk dxj sup dxk dxk furthermore holds probability tending one dxk dxk dxj sup dxk dxk also use function bounded mapped function bounded follows sup dxk sup sup dxk sup smooth backfitting proportional hazards probability one show bound follows directly last inequality condition proof note left hand side bounded constant times xxj dxj dxk dxk thus follows application first inequality condition proof note left hand side bounded constant times sup dxj dxk follows application second inequality condition proof one uses show left hand sides equations respectively condition order one replaces thus one show using arguments proofs want show brd define using fact probability tending one one gets suffices show choices log tbl proof claim suffices show norm summand order log shown using condition hiabu dxk dxj log dxk dxk sup dxk log dxk sup log sup claims shown similarly using additionally condition see also note sum elements equal one checks easily statement proposition remains show proof two claims one applies condition proof proposition statement proposition follows immediately proposition proof theorem main tool prove theorem theorem see example deimling since theorem central considerations state theorem theorem consider banach spaces map assume derivative exists invertible following conditions satisfied lkx smooth backfitting proportional hazards equation unique solution furthermore approximated newtons iterative method holds kxk come proof theorem proof theorem equation rewritten fbk fbk xxk dxk dxk depend note define additional operator following equations hiabu xxk note derivatives xxk xxk main idea proof apply theorem theorem mapping norm abuse notation also denote starting point choose application theorem spaces equal dxj consider operators note last assumption note get last assumption smooth backfitting proportional hazards similarly one uses last assumption show probability tending one show locally lipschitz around exist constants probability tending one furthermore show invertible argue application theorem imply implies statement theorem inequality show imply since also holds constant probability tending one gives condition theorem furthermore application get together gives therefore probability tending one condition also holds replaced thus get conditions theorem fulfilled probability tending one shows remains show proof note lipschitz first order taylor expansion yields equation follows claim follows directly assumption proof show invertible proof claim start showing bijective proof injectivity assume hiabu show implies holds xxk implies thus holds furthermore get dxk xxk summing terms get implies 
application implies constant functions implies check surjective consider dxk show implies element perpendicular range space since linear shows surjective smooth backfitting proportional hazards one gets choice dxk xxk exactly arguments injectivity conclude thus shown invertible remains show bounded bounded inverse theorem claim suffices show bounded boundedness shown application concludes proof theorem discrete data data given idisc idisc define occurrence exposure unstructured local linear hazard estimator becomes ddisc disc ddisc disc ddisc disc discrete versions respectively disc disc disc disc disc hiabu criterion written thus finally exp acknowledgements research second author supported deutsche forschungsgemeinschaft research training group rtg third author acknowledges support spanish ministry economy competitiveness grant number includes support european regional development fund erdf references andersen borgan gill keiding statistical models based counting processes new york springer beran nonparametric regression randomly censored survival data technical report dept univ california berkeley bowman alternative method smoothing density estimates biometrika dabrowska regression censored survival time data scand actuar deimling nonlinear functional analysis berlin springer fan gijbels king local likelihood local partial likelihood hazard regression ann stat janys nielsen bandwidth selection marker dependent kernel hazard estimation comput stat data hall large sample optimality least squares density estimation ann stat hastie tibshirani generalized additive models london chapman hall smooth backfitting proportional hazards hiabu relationship classical chain ladder granular reserving scand actuar appear hiabu mammen nielsen forecasting local linear survival densities biometrika lee mammen nielsen park asymptotics density forecasting ann stat lee mammen nielsen park operational time density forecasting ann stat appear lin huang global partial likelihood estimation additive cox proportional hazards model statist plann inference linton nielsen kernel method estimating structured nonparametric regression based marginal integration biometrika linton nielsen van geer estimating multiplicative additive hazard functions kernel methods ann stat mammen linton nielsen existence asymptotic properties backfitting projection algorithm weak conditions ann stat mammen nielsen forecasting applied reserving mesothelioma insurance math econom nielsen sperlich verrall continuous chain ladder reformulating generalising classical insurance problem expert syst appl martinussen scheike dynamic regression models survival data new york springer mckeague utikal inference nonlinear counting process regression model ann stat nielsen marker dependent kernel hazard estimation local linear estimation scand actuar nielsen linton kernel estimation marker dependent hazard model ann stat nielsen sperlich smooth backfitting practice roy statist soc ser rudemo empirical choice histograms kernel density estimators scand stat stone optimal global rates convergence nonparametric regression ann stat hiabu stone additive regression nonparametric models ann stat park mammen smooth backfitting generalized additive models ann stat
10
To appear in Theory and Practice of Logic Programming (October).

Diagrammatic Confluence for Constraint Handling Rules

Technical University of Madrid (UPM). (The research leading to these results received funding from the programme for attracting talent to the young PhD Montegancedo Campus of International Excellence (PICD), the Madrid Regional Government project PROMETIDOS, and the Spanish Ministry of Science (MEC) project DOVES.)

Abstract. Confluence is a fundamental property of Constraint Handling Rules (CHR) since, as in other rewriting formalisms, it guarantees that computations are not dependent on rule application order, and also because it implies the logical consistency of the program's declarative view. In this paper we are concerned with proving the confluence of non-terminating CHR programs. For this purpose, we derive from van Oostrom's decreasing diagrams method a novel criterion on CHR critical pairs that generalizes the preexisting criteria. We subsequently improve this result with modularity of CHR confluence, which permits modular combinations of possibly non-terminating confluent programs without loss of confluence.

Keywords: CHR, confluence, decreasing diagrams, modularity of confluence.

1. Introduction

Constraint Handling Rules (CHR) is a constraint logic programming language introduced for the easy development of constraint solvers that has matured into a general concurrent programming language. Operationally, a CHR program consists of a set of guarded rules that rewrite multisets of constrained atoms; declaratively, a CHR program can be viewed as a set of logical implications executed by a deduction principle.

Confluence is a basic property of rewriting systems that refers to the fact that two finite computations starting from a common state can be prolonged so as to eventually meet in a common state again. Confluence is an important property for any language, since it is desirable that computations are not dependent on a particular rule application order. In the particular case of CHR the property is even more desirable, as it guarantees correctness of the program (Abdennadher et al.): a confluent program has a consistent logical reading. The confluence of a CHR program is also a fundamental prerequisite for the logical completeness results of Abdennadher et al.; it makes program parallelization possible (Meister et al.) and may simplify program equivalence analyses (Abdennadher et al.).

Following the pioneering research of Abdennadher et al., existing work dealing with confluence of CHR limits itself to terminating programs; see for instance the works of Abdennadher and of Duck et al. Nonetheless, proving confluence without global termination assumptions is still a worthwhile objective. From a theoretical point of view it is an interesting topic, as illustrated by the following example: typical CHR programs fail to terminate at the level of the abstract semantics even if they terminate at more concrete levels. Indeed, a number of analytical results on the language rest on a notion of confluence of programs considered with respect to the abstract semantics. For instance, in the current state of knowledge, even a result as important as the guarantee of correctness by confluence holds only for programs considered with respect to the most general operational semantics of CHR, namely the abstract semantics.

Example 1 (partial order constraint). Let us consider the classic CHR introductory example, namely a constraint solver for partial orders. It consists of the following four rules, which define the meaning of the symbol leq using the equality constraint (the rule displays were lost; they are restated here in standard CHR syntax for this well-known program):

duplicate     @ leq(X,Y) \ leq(X,Y) <=> true.
reflexivity   @ leq(X,X) <=> true.
antisymmetry  @ leq(X,Y), leq(Y,X) <=> X = Y.
transitivity  @ leq(X,Y), leq(Y,Z) ==> leq(X,Z).

The duplicate rule implements duplicate removal: in words, it states that if two copies of an atom leq(X,Y) are present, one of them can be removed. The reflexivity and antisymmetry rules respectively state that any atom of the form leq(X,X) can be removed, and that two atoms leq(X,Y) and leq(Y,X) can be substituted by the equality constraint X = Y. Finally, the transitivity rule, a propagation rule, states that whenever atoms leq(X,Y) and leq(Y,Z) are present, the atom leq(X,Z) may be added as well.

As is well known, a program like this one, using propagation rules, faces a trivial non-termination problem when considered with respect to the abstract semantics: indeed, under this semantics a propagation rule applies again to any state it has just produced, leading to trivial loops. In order to solve this problem, Abdennadher proposed a semantics in which a propagation rule may not be applied twice to the same combination of atoms. Nonetheless, this proposal does not solve all problems of termination: the transitivity rule may still loop on queries containing a cycle in a chain of inequalities, even when considered with respect to Abdennadher's semantics; consider for instance a query containing such a cycle. In fact, in order to be terminating, the rules
reflexivity and antisymmetry must have priority over the transitivity rule. This behaviour can be achieved by considering more concrete semantics, such as the refined semantics of Duck et al. These semantics reduce the CHR execution model, for example by applying rules in textual order; in exchange for gaining termination, the concrete semantics lose a number of analytical results. For instance, although a CHR program can be run in parallel under the abstract semantics, one may obtain incorrect results for programs written with the refined semantics in mind: if the result of a program relies on a particular rule application order, a parallel execution can garble this order, leading to unexpected results. Interestingly, confluence at the abstract, possibly non-terminating, level may come to the rescue of the concrete semantics: if a program is confluent at this semantic level, no rule application order needs to be specified, and the result is not dependent on a particular application order. Similar considerations can be discussed for equivalences of CHR programs.

From a practical point of view, proving confluence without an assumption of termination is important because it may be desirable to prove the confluence of a program whose termination cannot be inferred. Indeed, there exist simple programs, such as the Collatz function, whose termination is only conjectured (Guy). Furthermore, since CHR is a general-purpose language, analytical tools for the language must handle programs that do not terminate at some semantic level: for instance, interpreters of the language (Sneyers et al.) and typical concurrent programs (see the numerous examples of concurrent systems given by Milner). It has also recently been demonstrated that execution models of CHR yield elegant frameworks for programming with coinductive reasoning. As a motivating example of this class of intrinsically non-terminating programs, we use the following solution to the seminal dining philosophers problem.

Example 2 (dining philosophers). Consider the following CHR program, which implements a solution to the dining philosophers problem, extended to count the number of times each philosopher eats. It consists of two rules, eat and thk. An atom represents a fork, while two further atoms respectively represent an eating and a thinking philosopher, seated between two forks, that has already eaten a given number of times. (The rule displays were lost; their effect is as follows.) On the one hand, the rule eat states that a thinking philosopher seated between two forks lying on the table may start eating, having picked up both forks. On the other hand, the rule thk states that a philosopher may stop eating, putting back both forks and increasing her count. The initial state corresponding to the dining philosophers seated around a table is encoded as a set of fork and thinking-philosopher atoms.

Despite the fact that this program is intrinsically non-terminating, we may be interested in its confluence. For this example we may make use of one of the previously mentioned applications: confluence simplifies observational equivalence. Confluence may also simplify proofs of fundamental properties of concurrent systems, for instance the absence of deadlock (see the simulation sketch at the end of this introduction). Starting from the initial state, one can easily construct a derivation in which the i-th philosopher has eaten an arbitrary number of times; hence, if the program is confluent, we can infer that it is always possible to extend a finite derivation to one in which the i-th philosopher eats strictly more, i.e. no derivation leads to a deadlock. To the best of our knowledge, the only existing principle for proving confluence of non-terminating programs is the strong confluence criterion (Fages et al.; Raiser and Tacchella); however, this criterion appears too weak to apply to common CHR programs such as the examples of this paper.

We are concerned here with extending the CHR confluence theory to capture a large class of possibly non-terminating programs. For this purpose, we derive from the decreasing diagrams technique a novel criterion that generalizes the existing confluence criteria for CHR. The decreasing diagrams technique is a method developed by van Oostrom that subsumes the other sufficient conditions for confluence; applying the method requires that local rewrite peaks, the points where the rewriting relation diverges, can be completed into decreasing diagrams. The present paper makes two main contributions. In Sect. 3 we present a particular instantiation of the decreasing diagrams technique for CHR and show that, in the context of this particular instantiation, the verification of decreasingness can be restricted to a standard notion of critical pairs. In Sect. 4 we extend this with modularity of confluence, so as to be able to combine programs independently proven confluent without losing confluence.

2. Preliminaries: abstract confluence. This section gathers the required notations, definitions and results on the confluence of abstract rewriting systems.
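As promised above, here is a small Python simulation of Example 2, entirely our own illustration (the atom encodings and names are hypothetical). It treats the state as a multiset of atoms and applies the two rules under random scheduling, checking that some rule is always applicable, i.e. that no deadlock is reached:

import random
from collections import Counter

# State is a multiset of atoms. ('f', i) is fork i; ('t', i, n) and
# ('e', i, n) are philosopher i thinking/eating, with n meals so far.
def initial_state(n):
    return Counter([('f', i) for i in range(n)] +
                   [('t', i, 0) for i in range(n)])

def applicable(state, n):
    """Instances of the two rules enabled in state (abstract semantics)."""
    steps = []
    for i in range(n):
        left, right = ('f', i), ('f', (i + 1) % n)
        for atom in list(state):
            if atom[0] == 't' and atom[1] == i and state[left] and state[right]:
                steps.append(('eat', i, atom[2]))  # pick up both forks
            if atom[0] == 'e' and atom[1] == i:
                steps.append(('thk', i, atom[2]))  # put forks back, count up
    return steps

def apply_step(state, step, n):
    kind, i, meals = step
    left, right = ('f', i), ('f', (i + 1) % n)
    if kind == 'eat':
        state -= Counter([left, right, ('t', i, meals)])
        state += Counter([('e', i, meals)])
    else:
        state -= Counter([('e', i, meals)])
        state += Counter([left, right, ('t', i, meals + 1)])

# Random rule scheduling never deadlocks for this program: if nobody can
# eat then someone is eating, so the thk rule is applicable.
n, state = 5, initial_state(5)
for _ in range(1000):
    steps = applicable(state, n)
    assert steps, "deadlock!"  # never triggers here
    apply_step(state, random.choice(steps), n)
print(sorted(a for a in state if a[0] == 't'))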
terese compendium referred detailed presentation rewrite relation rewrite short binary relation set objects rewrite symbol denote converse reflexive closure transitive closure closure use denote rewrites family rewrites set rewrites indexed set labels family set denote union reduction finite sequence rewriting steps form reduction would abreviated intermediary states relevant peak pair reductions common element local peak peak formed two reductions valley pair reductions ending common element peak joinable true fig confluence fig local confluence diagrammatic confluence constraint handling rules fig strong confluence rewrite terminating infinite sequence form furthermore say confluent holds locally confluent holds strongly holds figures graphically represent definitions following standard diagrammatic notation solid edges stand universally quantified rewrites dashed edges represent existentially quantified rewrites seminal lemma newman know terminating locally confluent rewrite confluent another famous result due huet ensures strong confluence implies confluence present slight variation due hirokawa middeldorp decreasing diagrams technique suitable purposes interest decreasing diagrams method van oostrom reduces problems general confluence problems local confluence exchange method requires confluence diagrams way peaks close decreasing respect labeling provided wellfounded preorder method complete sense countable confluence rewrite equipped labeling confluence undecidable property finding labeling may difficult rest paper say preorder wellfounded strict preorder associated iff terminating relation let family rewrites wellfounded preorder local peak decreasing respect following holds set labels stands family rewrites locally decreasing local peaks form decreasing respect common wellfounded preorder rewrite locally decreasing union decreasing families rewrites property graphically represented figure sake simplicity use definition weaker one huet worth noting counterexamples given introduction stay relevant general definition fig local decreasingness theorem decreasing diagram van oostrom countable rewrite confluent locally decreasing recall results used later lemma terese rewrites rewrites confluent iff confluent preliminaries constraint handling rules section recall syntax semantics chr book referred general overview language syntax formalization chr assumes language constraints containing equality theory defines atoms using different set predicate symbols following denote arbitrary set identifiers slight abuse notation allow confusion conjunctions multiset unions omit braces around multisets use comma multiset union use denote set free variables formula notation denotes existential closure exception free variables chr program finite set eponymous rules form kept head removed head user body multisets atoms guard body conjunctions constraints rule name identifier assumed unique program rules heads empty prohibited empty guard resp empty kept head omitted symbol resp symbol rules diagrammatic confluence constraint handling rules divided two classes simplification removed head propagation rules otherwise propagation rules written using alternative syntax operational semantics section recall operational semantics raiser equivalent abstract semantics general operational semantics chr prefer former includes rigorous notion equivalence essential component confluence analysis chr state tuple user store multiset atoms store conjunction constraints global variables finite set variables unsurprisingly 
local variables state variables state global confusion occur syntactically merge user stores may futhermore omit global variables component states local variables following use denote set states following raiser always implicitly consider states modulo structural equivalence formally state equivalence least equivalence relation states satisfying following rules states considered modulo equivalence operation semantics chr expressed single rule formally operational semantcs program given least relation states satisfying rule renaming program confluent resp terminating confluent resp terminating going recall important property chr semantics property monotonicity means transition possible state transition possible larger state help reduce level verbosity introduce notion quantified conjunction states fages operator allows composition states disjoint local variables quantifying global variables changing global variables unlike standard presentations definition distinguish simplification rules form simpagation rules local ones formally quantified conjunction binary operator states parametrized set variables satisfying note side condition restrictive local variables always renamed using implicit state equivalence proposition monotonicity chr let chr program chr states set variables declarative semantics owing origins tradition clp chr language features declarative semantics direct interprestation logic formally logical reading rule form guarded equivalence logical reading program within theory conjunction logical readings rules constraint theory denoted operational semantics sound complete respect declarative semantics abdennadher furthermore program confluent respect consistent logical reading abdennadher diagrammatic confluence constraint handling rules section concerned proving confluence large class chr programs indeed explained introduction existing criteria sufficiently powerful infer confluence common programs see examples concrete examples avoid limitation derive decreasing diagrams technique novel csriterion chr critical pairs generalizes local strong confluence criteria analogue criterion developed linear term rewriting systems trs jouannaud van oostrom labels constraint handling rules order apply decreasing diagram technique chr need first label chr transitions work use two labelings proposed van oostrom trs first one consists labeling transition name applied rule labeling diagrammatic confluence constraint handling rules ideal capturing strong properties linear trs within proof main result also use consists labeling transition source second labeling captures confluence terminating rewrites practice assume set rule identifiers defined disjoint union given program denote resp set rules form built resp call inductive part subsequently assume terminating called coinductive typically definition chr program family rewrites indexed rule identifiers preorder rule identifiers admissible inductive rule identifier strictly smaller coinductive one holds critical peaks trs basic techniques used prove confluence consist showing various confluence criteria finite set special cases called critical pairs critical pairs generated superposition algorithm one attempts capture general way sides two rules system may overlap notion critical pairs successfully adapted chr abdennadher introduce slight extension notion takes account defined definition critical peak let assume chr rules renamed apart critical ancestor state rules state form satisfying following properties following tuple called critical peak 
critical peak program program critical peak rule rule critical peak program critical peak critical peak inductive involves inductive rules critical peak coinductive involves least one coinductive rule critical peak example consider solver partial order given example following ciritial peak stems overlapping heads rules antisymmetry transitivity anti trans come main result showing study decreasingness respect restricted critical peaks without loss generality definition critical program critically admissible preorder inductive part terminating inductive critical peaks joinable coinducitve critical peaks decreasing program respect admissible preorder program strongly purely coinductive without inductive rules theorem programs confluent proof let assume program given preorder let family rewrites indexed rule state defined inductive part coinductive part let union assuming without loss generality finite trivially wellfounded obtain wellfounded help theorem suffices prove peak decreasing distinguish two cases rules used respectively produce apply different parts monotonicity chr transitions infer show valley respects property within definition decreasing diagrams proceed cases types rules inductive inductive since terminating infer conclude peak decreasing diagrammatic confluence constraint handling rules coinductive conclude peak decreasing coinductive inductive case symmetric case coinductive conclude peak decreasing applications rules used respectively produce overlap exist critical peak state set variables proceed cases types rules rules inductive hypothesis monotony chr infer construction get conclude discussion decreasingness peak necessary notice hold one rules coinductive hypothesis equivalently monotony chr theorem strictly subsumes criteria proving confluence chr programs aware namely local confluence abdennadher strong confluence fages criteria corollary local confluence terminating program confluent critical peaks joinable corollary strong confluence program confluent critical peaks joinable following examples show criterion powerful local strong confluence criteria trans anti ref lex anti trans anti dupl anti fig critical peaks example consider solver partial order given example since trivially one apply local confluence criterion strong confluence apply either joinable critical peaks instance considere peak given example anti trans seen may reduced side rewritten side less two steps using reflexivity antisymmetry rules nonetheless confluence deduced using full generality theorem purpose assume rules except transitivity inductive take admissible preorder clearly inductive part terminating indeed application one three first rules strictly reduces number atoms state systematic analysis critical peaks prove peak closed respecting hypothesis ruledecreasingness fact critical peaks closed without using transitivity diagrams involving transitivity rule given examples figure example consider program implementing dining philosophers problem given example confluence inferred either local strong confluence one hand obviously hence prevents application local confluence criterion hand critical peaks consider example peak given figure critical rule eating joinable however figure shows joinable thk eat thk thk eat thk peak decreasing fact critical peaks involve rule eat may closed similar manner thus assuming eat rule coinductive strictly greater thk infer using theorem confluent diagrammatic confluence constraint handling rules eat eat thk thk eat eat thk thk fig critical peak program 
partitioning criterion based division program terminating part possibly one since program partitioned multiple ways may case program depends splitting used see example purely theoretical point view particular drawback since property aim proving confluence program undecidable pragmatical point view appears classic examples chr programs proved without assumption termination particular unable find counterexample confluent program book example consider chr solver partial order given example assuming rule coinductive shown strongly respect order satisfying transitivity duplicate antisymmetry reflexivity illustrated figure critical peaks involving transitivity rules may closed using rules strictly smaller similarly one verify critical peak given rule smaller equal one closed using rules strictly smaller peaks trivialy decreasing choice good partition may simplify proofs maximizing inductive part program number peaks must proved decreasing coinductive critical peaks reduced indeed joinability peak respect inductive part program must terminating decidable problem efficiently peak respect possibly program likely consequently good partition limit use heuristics human interactions necessary infer diagram coinductive critical peak since termination also undecidable property expect fully automatize search optimal partition must content heuristic procedures despite fact formal development procedures beyond scope paper practical experience suggests trivial partitioning may interesting partition consists considering inductive rules strictly reduce number atoms state even choice necessarily optimal may even produce bad partitions seem produce relevant partitions typical chr solvers illustrated example give two counterexamples first shows dependent particular splittings second presents confluent program example consider following chr rules duplicate denote program built duplicate rules program built duplicate rules clearly terminating duplicate rule strictly reduces number atoms state leaves number atoms unchanged strictly reduces size argument one also verify single critical peak figure shows way peak may closed thus assuming rules inductive infer program however assumed coinductive verify sole critical peak decreasing respect admissible order case yields one critical peak decreasing respect admissible order see figure however time ing assumed inductive consequently inferred confluent using theorem modularity chr confluence section concerned proving confluence union confluent programs modular way particular programs proved confluent using criterion practice improve result see works chr local confluence abdennadher abdennadher decreasingness peak given order seems difficult problem joinability without termination undecidable diagrammatic confluence constraint handling rules fig critical peak fig critical peak states terminating union confluent programs overlap critical peak confluent particular allow overlapping drop termination hypotheses theorem modularity confluence let two confluent chr programs critical peak joinable confluent formally proving theorem worth noting despite fact modularity confluence theorem similar flavors results different scopes indeed one hand modularity confluence assume anything way confluent instance two programs theorem require union inductive parts terminating theorem important since termination modular property even two terminating programs share atoms one sure union terminating see section book details hand criterion allows critical peaks closed complex way theorem permits 
proof theorem rests following lemma states hypotheses theorem strongly commutes lemma critical peaks proof prove induction length derivation peak property holds base case immediate inductive case know induction hypothesis exists state sufficient prove use definition relation composition order conclude assume otherwise holds trivially distinguish two cases either rules involved local peak apply different parts else applications overlap first case use chr monotonicity infer state set second case must exist critical peak variables hypotheses chr monotonicity obtain results proof theorem let one hand confluence note hand combining lemma case lemma infer trivial application theorem find confluent conclude noting apply case lemma worth noting equals neither conclusion employing decreasing diagrams technique chr established new criterion chr confluence generalizes local strong confluence criteria crux novel criterion rests distinction terminating part inductive part part coinductive part program together labeling transitions rules importantly demonstrate particular case proposed application decreasing diagrams check decreasingness restricted sole critical pairs hence making possible automatize process also improve result modularity confluence allows modular combination programs without loss confluence worth saying diagrammatic proofs sketched paper systematically verified prototype diagrammatic confluence checker practice checker automatically generates critical pairs program provided admissible order using tactics finit sets reductions tries join respecting current work involves investigating development heuristics automatically infer without human interaction also plan develop new completion procedure based criterion presented duplicate removal important programming idiom chr development new techniques capable dealing confluent programs like given example also worth investigating diagrammatic confluence constraint handling rules references abdennadher operational semantics confluence constraint propagation rules proceedings international conference principles practice constraint programming lncs vol springer berlin germany abdennadher operational equivalence chr programs constraints proceedings international conference principles practice constraint programming lncs vol springer berlin germany abdennadher meuss confluence constraint handling rules proceedings international conference principles practice constraint programming lncs vol springer berlin germany abdennadher meuss confluence semantics constraint simplification rules constraints duck stuckey banda holzbaur refined operational semantics constraint handling rules proceedings international conference logic programming iclp lncs vol springer berlin germany duck stuckey sulzmann observable confluence constraint handling rules proceedings international conference logic programming iclp lncs vol springer berlin germany theory practice constraint handling rules logic programming special issue constraint logic programming parallelizing constraint handling rules using confluence proceedings international conference logic programming iclp lncs vol springer berlin germany constraint handling rules cambrige university press cambrige guy unsolved problems number theory problem books mathematics springer berlin germany semantics constraint handling rules theory practice logic programming int conference logic programming iclp special issue observational equivalences linear logic concurrent constraint languages theory practice logic programming int 
conference logic programming iclp special issue fages abstract critical pairs confluence arbitrary binary relations proceedings international conference rewriting techniques applications rta number lncs springer berlin germany hermenegildo clp projection constraint handling rules international acm sigplan conference principles practice declarative programming ppdp acm press new york usa hirokawa middeldorp decreasing diagrams relative termination ijcar lncs vol springer berlin germany huet confluent reductions abstract properties applications term rewriting systems abstract properties applications term rewriting systems journal acm jouannaud van oostrom diagrammatic confluence completion proceedings internatilonal collogquium automata languages programming icalp lncs vol springer berlin germany meister parallel implementation algorithm chr workshop logic programming wlp infsys research report vienna austria milner communicating mobile systems cambrige university press cambrige newman theories combinatorial definition equivalence annals mathematics raiser betz equivalence chr states revisited proceedings international workshop constraint handling rules chr report kath univ leuven leuven belgium raiser tacchella confluence chr programs proceedings international workshop constraint handling rules chr sneyers schrijvers demoen computational power complexity constraint handling rules acm trans program lang syst terese term rewriting systems cambrige university press cambrige van oostrom confluence decreasing diagrams theor comput sci van oostrom confluence decreasing diagrams converted proceedings international conference rewriting techniques applications rta lncs springer berlin germany
6
design exploration hybrid deep generative architectures jan vivek parmar member ieee manan suri member ieee indian institute hauz khas new email manansuri learning applications gained tremendous interest recently academia industry restricted boltzmann machines rbms offer key methodology implement deep learning paradigms paper presents novel approach realizing hybrid based deep generative models dgm proposed hybrid dgm architectures hfox based switching oxram devices extensively used realizing multiple computational functions synapses weights internal storage iii stochastic neuron activation programmable signal normalization validate proposed scheme simulated two different architectures deep belief network dbn stacked denoising autoencoder classification reconstruction digits reduced mnist dataset images contrastivedivergence specially optimized oxram devices used drive synaptic weight update mechanism layer network overall learning rule based greedylayer wise learning back propagation allows network trained good stage performance simulated hybrid dgm model matches closely software based model deep network test accuracy achieved dbn mse sda network lower software based approach endurance analysis simulated architectures show epochs training single rbm layer maximum switching oxram device cycles index learning stacked denoising autoencoder sda deep belief network dbn rram deep generative models restricted boltzmann machine rbm ntroduction neurons brain capacity process large amount high dimensional data various sensory inputs still focusing relevant components decision making implies biological neural networks capacity perform dimensionality reduction facilitate decision making field machine learning artificial neural networks also require similar capability availability massive amounts high dimensional data generated everyday various sources digital information thus becomes imperative derive efficient method dimensionality reduction facilitate tasks like classification feature learning storage etc deep generative networks autoencoders shown perform better many commonly used statistical techniques pca principal component analysis ica independent component analysis encoding decoding high dimensional data networks traditionally trained using gradient descent based however observed deep networks gradient descent converge gets stuck local minima case purely randomized initialization solution problem weight initialization utilizing generative training procedure based contrastive divergence algorithm maximize performance algorithm dedicated hardware implementation required accelerate computation speed traditionally cmos based designs used utilizing commonly available accelerator like gpus fpgas asics etc recently introduction emerging memory devices pcm cbram oxram mram etc optimization possible design dedicated hardware accelerators given fact allow replacement certain large cmos blocks simultaneously emulating storage compute functionalities recent works present designs contrastive divergence based learning using resistive memory devices authors propose use model synapse store one synaptic weight authors experimentally demonstrated rbm realized resistive phase change memory pcm elements trained variant contrastive divergence algorithm implementing hebbian weight update designs justify use rram devices dense synaptic arrays also make use spike based programming mechanism gradually tuning weights negative weights implemented using two devices place single device per synapse apparent order 
implement complex learning rules larger deeper networks hardware complexity area footprint increases considerably using simplistic design strategy result need increase increase functionality rram devices design beyond simple synaptic weight storage described design exploiting intrinsic variability substitute randomly distributed hidden layer weights order gain area power savings made use another property rram devices exploiting variability device switching create stochastic neuron basic building block hybrid based restricted boltzmann machines rbm circuit paper build upon previous work hybrid rbm following novel contributions design deep generative models dgm utilize hybrid rbm building block design programmable output normalization block stacking multiple hybrid rbms simulation performance analysis two types dgm architectures synaptic weight resolution deep belief networks dbn stacked denoising autoencoders sda analysis learning performance accuracy mse using greedy training without backprop analysis learning impact rram device endurance hybrid dgm implementation oxram devices exploited four different storage compute functions synaptic weight matrix neuron internal state storage iii stochastic neuron firing programmable gain control block section discusses basics oxram deep generative networks section iii describes implementation details proposed hybrid dgm architectures section discusses simulation results section gives conclusions basics ram dgm rchitectures oxram working fig resistance distribution hfox device presented using optional selector device configuration oxram devices known demonstrate shown fig variability proposed architecture exploit oxram switching variability realization stochastic neuron circuit binary resistive switching realization synaptic weight internal state storage resistance modulation normalization block restricted boltzmann machines rbm fig basic characteristics hfox oxram device switching principle indicated experimental data corresponding device presented oxram devices structures sandwiching active based insulator layer metallic electrodes see fig active layer exhibits reversible switching behavior application appropriate programming across device terminals case oxram devices formation conductive filament active layer leads device dissolution filament puts device conductive filament composed oxygen vacancies defects resistance lrs level defined controlling dimensions conductive filament depends amount current flowing active layer current flowing active layer controlled either externally imposed current compliance unsupervised learning based generative models gained importance use deep neural networks besides useful supervised predictor unsupervised learning deep architectures interest learn distribution generate samples rbms particular widely used building blocks deep generative models dbn sda models made stacking rbm blocks top training models using traditional based approaches computationally intensive problem hinton showed models trained fast greedy layerwise training making task training deep networks based stacking rbms feasible rbm block consists two layers fully connected stochastic sigmoid neurons shown fig input first layer rbm called visible layer second feature detector layer called hidden layer rbm trained using fig graphical representation rbm nodes algorithm described output layer bottom rbm acts visible layer next rbm stacked denoising autoencoder sda autoencoder network deep learning framework mostly used denoising corrupted data 
dimensionality reduction weight initialization applications recent years random weight initialization techniques preferred use generative training networks however dgms continue ideal candidate dimensionality reduction denoising applications autoencoder network basically realized using two networks encoder network layers rbms stacked top one another mirrored decoder network weights encoder layer data reconstruction stack rbms autoencoder trained one unrolled autoencoder network encoder decoder shown fig deep belief network dbn dbns probabilistic generative models composed multiple layers stochastic latent variables latent variables typically binary values often called hidden units feature detectors top two layers undirected symmetric connections form associative memory lower layers receive directed connections layer states units lowest layer represent data vector typical dbn shown fig uses single rbm first two layers followed sigmoid belief network logistic regression layer final classification output two significant dbn properties efficient procedure learning generative weights determine variables one layer depend variables layer learning values latent variables every layer inferred single pass starts observed data vector bottom layer uses generative weights reverse direction dbns used generating recognizing images video sequences data low number units highest layer dbns perform dimensionality reduction learn short binary codes allowing fast retrieval documents images iii mplementation roposed rchitectures basic building block sda dbn rbm simulated architectures within single rbm block oxram devices used multiple functionalities basic rbm block shown fig replicated hidden layer memory states first rbm acting visible layer memory next rbm block fig rbm blocks common weight update module described section post training learned synaptic weights along sigmoid block used reconstructing test data architecture consist synaptic network synaptic network rbm block simulated using hfox oxram matrix synaptic weight digitally encoded group binary switching oxram devices number devices used per synapse depends required weight resolution architectures simulated work used resolution oxram synapse stochastic neuron block fig shows stochastic sigmoid neuron block neuron hidden visible sigmoid response fig basic rbm blocks stacked form deep autoencoder denoising noisy image using autoencoder fig dbn architecture comprising stacked rbms fig individual rbm training layer architecture rbm training block symbols represent hidden layer memory visible layer memory synaptic network respectively cascaded rbm blocks realizing proposed deep autoencoder shared weight update module fully digital based weight update module block level design single stochastic sigmoid neuron implemented using sigmoid circuit gain sigmoid circuit tuned optimizing scaling six transistors voltage output sigmoid circuit compared voltage drop across oxram device help comparator hfox based device repeatedly cycled intrinsic ron rof variability oxram device leads variable reference voltage comparator helps translate deterministic sigmoid output neuron output effectively stochastic nature given moment specific neuron output determines internal state needs stored rbm driven learning neuron internal state stored using individual oxram devices placed comparator single neuron sufficient state storage since rbm requires neuron binary activation state weight update block weight update module purely digital circuit reads synaptic weights internal 
neuron states updates synaptic weights learning based rbm algorithm block consists array weight update circuits one shown fig synaptic weight updated based previous current internal neuron states mutually connected neurons hidden visible layers realized using two gates comparator outputs input first gate previous internal neuron states input second gate current internal neuron states based comparator output learning rate either added subtracted applied current synaptic weight wij wij vht output normalization block order chain design rbm need ensure signal output layer enhancement dynamic range signal deteriorate network depth increases purpose proposed hybrid programmable normalization circuit see fig whose gain bias tuned based oxram resistance programming circuit schematic programmable normalization block shown fig order check variation gain considered programming oxram three different set states differential amplifier consisting gain control circuit biasing circuit used implement normalization function two stage amplifier consisting transistors used gain circuit controlled using constant circuit whose output fed constant circuit consists transistors one oxram based oxram resistance circuit changed thereby changing output potential affects vgs thereby controlling gain circuit validate design performed simulation circuit using oxram device compact model cmos design kit simulated variation gain circuit based resistance state oxram shown fig gain control oxram programming found prominent higher operating frequencies bias control implemented potential divider circuit oxram potential divider circuit determines potential across input swept potential across increases fixed output switching voltage also increases thereby controlling bias fig programmable normalization circuit circuit schematic gain variation variation oxram resistance state output eep earning imulations esults simulations proposed architectures dbn sda performed matlab generative networks algorithm behavioral model blocks described section iii simulated stochastic sigmoid neuron activation normalization circuits simulated cadence virtuoso using cmos design kit oxram compact model stacked denoising autoencoder performance analysis trained two autoencoder networks number neurons final encoding layer varying levels depth compared denoising performance see fig network single synaptic weight realized using oxram devices resolution neurons logistic activation except last ten units classification layer linear networks trained reduced mnist dataset images tested denoising new noise corrupted images see fig table roposed autoencoder performance reduced mnist network implementation mse software hybrid oxram sda software hybrid oxram sda table presents learning performance proposed increasing depth network useful current learning algorithm tuning parameters fig deep deep denoising results corrupted mnist images deep belief network performance analysis simulated two deep belief network architectures shown fig layer variants performance network measured testing samples reduced mnist dataset results shown table measured test accuracy using parameters top accuracy correct class corresponds output neuron highest response top accuracy correct class corresponds top output neurons highest response top accuracy correct class corresponds top output neurons highest response table performance simulated hybrid cmosoxram dbn matches closely software based accuracy lower dbn formed rbms significant drop test accuracy dbn rbms acceptable goal greedy 
training network good state using allow faster convergence thus lower table iii aximum ram switching activity layer sda training device placement max switching activity table aximum ram switching activity layer dbn training fig simulated layer dbn architecture accuracy training deeper network acceptable weights would optimized using table roposed dbn erformance educed mnist device placement max switching activity test accuracy network implementation software hybrid oxram dbn software hybrid oxram dbn tuning voxram amplifying gain sigmoid activation circuits network use gain factor order balance low current values obtained result oxram device resistance values amplification low lead saturation network learn proper reconstruction data necessitates proper tuning amplifier gain effective learning architecture amplifier gain voxram important along standard ones momentum decay rate learning rate etc different consecutive pair layers higher dimensional input layer require lower amplifying gain voxram switching activity analysis proposed architecture resistive switching oxram devices observed following sections architecture synaptic matrix stochastic neuron activation internal neuron state storage rram devices suffer limited cycling endurance million cycles stochastic neuron activation oxram device repeatedly cycled state voltage drop across device used generate stochastic signal fed one comparator inputs thus neuron activation block related switching activity depends number data samples well number epochs maximum switching per device layer estimated using nevents nepochs nsamples nbatch another part architecture oxram device may observe significant number switching events synaptic matrix since interested device endurance consider worst case maximum number hits particular oxram device take entire weight update procedure worst case analysis make following bit encoding synaptic weight exists oxram device switched every single time thus maximum possible number hits device would take synaptic weight update procedure estimated using nswitchevents nbatch nepochs simulated switching activity reduced mnist training neuron layer synaptic matrix shown table iii table corresponding sda dbn architectures respectively key observations summarized increasing depth network increases amount switching hidden layers increasing depth network significant impact switching events synaptic matrix onclusion paper proposed novel methodology realize dgm architectures using type hybrid cmosrram design framework achieve deep generative models proposing strategy stack multiple rbm blocks overall learning rule used study based wise learning back propagation allows network trained good stage rram devices used extensively proposed architecture multiple computing storage actions total rram requirement largest simulated network dbn sda simulated architectures show performance proposed dgm models matches closely software based models layers deep network test accuracy achieved dbn reduced mnist mse sda network endurance analysis shows resonable maximum switching activity future work would focus realizing optimal strategy implement backpropagation proposed architecture enable complete training dgm hybrid dgm architecture acknowledgement research activity suri partially supported department science technology dst government india firp grant authors would like express gratitude chakraborty authors would like thank alibart querlioz hfox device data ppendix code simulations discussed paper available https interested researchers contact 
authors access code repository eferences pillow simoncelli dimensionality reduction neural models generalization average covariance analysis journal vision vol cunningham byron dimensionality reduction largescale neural recordings nature neuroscience vol hinton zemel autoencoders minimum description length helmholtz free energy advances neural information processing systems bengio lamblin popovici larochelle greedy layerwise training deep networks advances neural information processing systems hinton osindero teh fast learning algorithm deep belief nets neural computation vol raina madhavan deep unsupervised learning using graphics processors proceedings annual international conference machine learning acm kim mcafee mcmahon olukotun highly scalable restricted boltzmann machine fpga implementation field programmable logic applications fpl international conference ieee merolla arthur akopyan imam manohar modha september digital neurosynaptic core using embedded crossbar memory vol stromatias neil pfeiffer galluppi furber liu robustness spiking deep belief networks noise reduced bit precision hardware platforms frontiers neuroscience vol suri bichler querlioz cueto perniola sousa vuillaume gamrat desalvo phase change memory synapse neuromorphic systems application complex visual pattern extraction electron devices meeting iedm ieee international vol december alibart zamanidoost strukov pattern classification memristive crossbar circuits using situ situ training nature communications vol yang strukov stewart memristive devices computing nature nanotechnology vol salvo silicon memories paths innovation john wiley sons jackson rajendran corrado breitwisch burr cheek gopalakrishnan raoux rettner padilla nanoscale electronic synapses using phase change devices acm journal emerging technologies computing systems jetc vol vincent larroque zhao romdhane bichler gamrat klein querlioz spintransfer torque magnetic memory stochastic memristive synapse circuits systems iscas vol wong salahuddin memory leads way better computing nature nanotechnology vol burr shelby sidler nolfo jang boybat shenoy narayanan virwani giacometti experimental demonstration tolerancing neural network synapses using memory synaptic weight element ieee transactions electron devices vol milo pedretti carboni calderoni ramaswamy ambrogio ielmini demonstration hybrid neural networks spike plasticity electron devices meeting iedm ieee international ieee sheri rafique pedrycz jeon contrastive divergence restricted boltzmann machine engineering applications artificial intelligence vol eryilmaz neftci joshi kim brightsky lung lam cauwenberghs wong training probabilistic graphical model resistive switching electronic synapses ieee transactions electron devices vol dec suri parmar exploiting intrinsic variability filamentary resistive memory extreme learning machine architectures ieee transactions nanotechnology vol nov suri parmar kumar querlioz alibart neuromorphic hybrid rbm architecture memory technology symposium nvmts oct wong lee chen chen lee chen tsai rram vol ieee suri querlioz bichler palma vianello vuillaume gamrat desalvo stochastic computing using binary cbram synapses electron devices ieee transactions vol baeumer valenta schmitz locatelli mente rogers sala raab nemsak shim subfilamentary networks cause variability memristive devices acs nano vol jiang huang chen gao liu kang wong design optimization rram using spice model design automation test europe conference exhibition date ieee ielmini resistive switching memories 
based metal oxides mechanisms reliability scaling semiconductor science technology vol bengio learning deep architectures foundations trends machine learning vol hinton practical guide training restricted boltzmann machines momentum vol vincent larochelle lajoie bengio manzago stacked denoising autoencoders learning useful representations deep network local denoising criterion journal machine learning research glorot bengio understanding difficulty training deep feedforward neural networks proc aistats vol hinton deep belief networks scholarpedia vol huang boureau lecun unsupervised learning invariant feature hierarchies applications object recognition computer vision pattern recognition cvpr ieee conference ieee sutskever hinton learning multilevel distributed representations sequences artificial intelligence statistics taylor hinton roweis modeling human motion using binary latent variables advances neural information processing systems hinton salakhutdinov reducing dimensionality data neural networks science vol pan wilamowski vlsi implementation mode bipolar neuron circuitry neural networks proceedings international joint conference vol balatti ambrogio wang sills calderoni ramaswamy ielmini pulsed cycling operation endurance failure resistive rram electron devices meeting iedm ieee international ieee
9
sympiler transforming sparse matrix codes decoupling symbolic analysis may kazem cheshmi adobe research cambridge kamil michelle mills strout maryam mehri dehnavi university arizona tucson mstrout abstract sympiler code generator optimizes sparse matrix computations decoupling symbolic analysis phase numerical manipulation stage sparse codes computation patterns sparse numerical methods guided input sparsity structure sparse algorithm many simulations sparsity pattern changes little sympiler takes advantage properties symbolically analyze sparse codes apply transformations enable applying transformations sparse codes result code outperforms matrix factorization codes specialized libraries obtaining average speedups eigen cholmod respectively keywords matrix computations sparse methods loop transformations domainspecific compilation shoaib kamil rutgers university piscataway introduction sparse matrix computations heart many scientific applications data analytics codes performance efficient memory usage codes depends heavily use specialized sparse matrix data structures store nonzero entries however compaction done using index arrays result indirect array accesses due indirect array accesses difficult apply conventional compiler optimizations tiling vectorization even static index array operations like sparse matrix vector multiply static index array change algorithm complex operations dynamic index arrays matrix factorization decomposition nonzero structure modified computation making conventional compiler optimization approaches even difficult apply common approach accelerating sparse matrix computations identify specialized library provides manuallytuned implementation specific sparse matrix routine large number sparse libraries available superlu mumps cholmod klu umfpack different numerical kernels supported architectures specific kinds matrices rutgers university piscataway specialized libraries provide high performance must manually ported new architectures may stagnate architectural advances continue alternatively compilers used optimize code providing architecture portability however indirect accesses resulting complex dependence structure run loop transformation framework limitations compiler loop transformation frameworks based polyhedral model use algebraic representations loop nests transform code successfully generate dense matrix kernels however frameworks limited dealing loop bounds array subscripts arise sparse codes recent work extended polyhedral methods effectively operate kernels static index arrays building inspectors examine nonzero structure executors use knowledge transform code execution however techniques limited transforming sparse kernels static index arrays sympiler addresses limitations performing symbolic analysis compute structure remove dynamic index arrays sparse matrix computations symbolic analysis term numerical computing community refers phases determine computational patterns depend nonzero pattern numerical values information symbolic analysis used make subsequent numeric manipulation faster information reused long matrix nonzero structure remains constant number sparse matrix methods cholesky well known viewing computations graph elimination tree dependence graph quotient graph applying graph algorithm yields information dependences used efficiently compute numerical method sparse matrix computation libraries utilize symbolic information couple symbolic analysis numeric computation making difficult compilers optimize codes work presents sympiler 
generates sparse matrix code fully decoupling symbolic analysis numeric computation transforming code utilize symbolic information obtaining symbolic information running symbolic inspector sympiler applies transformations blocking resulting performance kazem cheshmi shoaib kamil michelle mills strout maryam mehri dehnavi copy rhs dependence graph dgl reachl reachsetsize library implementation decoupled code forward substitution peel col peel col reachsetsize figure four different codes solving linear system four code variants matrix stored compressed sparse column csc format representing matrix order column pointer row index nonzeros respectively dependence graph adjacency graph matrix vertices correspond columns edges show dependencies columns triangular solve vertices corresponding nonzero columns colored blue columns must participate computation due dependence structure colored red white vertices skipped computation boxes around columns show supernodes different sizes forward substitution algorithm library implementation skips iterations corresponding entry zero decoupled code uses symbolic information given reachset computed performing search code peels iterations corresponding columns within nonzeros equivalent libraries sympiler goes existing numerical libraries generating code specific matrix nonzero structure matrix structure often arises properties underlying physical system matrix represents many cases structure reoccurs multiple times different values nonzeros thus code combine transformations produce even efficient code transformations applied sympiler improves performance sparse matrix codes applying optimizations vectorization increased data locality extend improve performance shared distributed memory systems motivating scenario sparse triangular solve takes lower triangular matrix righthand side rhs vector solves linear equation fundamental building block many numerical algorithms factorization direct system solvers rank update methods rhs vector often sparse implementation visits every column matrix propagate contributions corresponding value rest see figure however sparse solution vector also sparse reducing required iteration space sparse triangular solve proportional number nonzero values taking advantage property requires first determining nonzero pattern based theorem gilbert peierls dependence graph matrix nodes edges used compute nonzero pattern matrix rank numerical cancellation neglected nonzero indices given reach set nodes reachable node computed search directed graph staring example dependence graph illustrated figure blue colored nodes correspond set final reach contains colored nodes graph figure shows four different implementations sparse triangular solve solvers assume input matrix stored compressed sparse column csc storage format implementation figure traverses columns typical library implementation shown figure skips iterations corresponding value zero implementation figure shows decoupled code uses symbolic information provided reachset decoupling simplifies numerical manipulation reduces complexity figure figure number floating point operations number nonzeros sympiler goes sympiler transforming sparse codes building leveraging generate code specialized specific matrix structure rhs code shown figure code iterates reached columns peels iterations number nonzeros column greater threshold case figure threshold peeled loops transformed vectorization speed execution shows power fully decoupling symbolic analysis phase code manipulates numeric values 
compiler aggressively apply conventional optimizations using reachset guide transformation matrices suitesparse matrix collection code shows speedups average compared forward solve code figure average compared code figure static sparsity patterns fundamental concept sympiler built structure sparse matrices scientific codes dictated physical domain change many applications example power system modeling circuit simulation problems sparse matrix used matrix computations often jacobian matrix structure derived interconnections among power system circuit components generation transmission distribution resources numerical values sparse input matrix change often change sparsity structure occurs rare occasions change circuit breakers transmission lines one physical components sparse systems simulations domains electromagentics computer graphics fluid mechanics assembled discretizing physical domain approximating partial differential equation mesh elements sparse matrix method used solve assembled systems sparse structure originates physical discretization therefore sparsity pattern remains except deformations adaptive mesh refinement used sparse matrices many physical domains exhibit behavior benefit sympiler contributions work describes sympiler code generator sparse matrix algorithms leverages symbolic information generate fast code specific matrix structure major contributions paper novel approach building symbolic inspectors obtain information sparse matrix used compilation transformations leverage information transform sparse matrix code specific algorithms implementations symbolic inspectors inspectorguided transformations two algorithms sparse triangular solve sparse cholesky factorization demonstration performance impact code generator showing code outperform libraries triangular solve cholesky factorization respectively sympiler code generator sympiler generates efficient sparse kernels tailoring sparse code specific matrix sparsity structures decoupling symbolic analysis phase sympiler uses information symbolic analysis guide code generation numerical manipulation phase kernel section describe overall structure sympiler code generator well transformations enabled leveraging information symbolic inspector sympiler overview sparse triangular solve cholesky factorization currently implemented sympiler given one numerical methods input matrix stored using compressed sparse column csc format sympiler utilizes symbolic inspector obtain information matrix information used apply optimizations lowering code numerical method addition lowered code annotated additional transformations unrolling applicable based information finally annotated code lowered apply optimizations output source code code implementing numerical solver represented abstract syntax tree ast sympiler produces final code applying series phases ast transforming code phase overview process shown figure initial ast triangular solve shown figure prior transformations symbolic inspector different numerical algorithms make use symbolic information different ways prior work described graph traversal strategies various numerical methods inspectors sympiler based strategies class numerical algorithms symbolic analysis approach sympiler uses specific symbolic inspector obtain information sparsity structure input matrix stores way use later transformation stages classify used symbolic inspectors based numerical method well transformations enabled obtained information combination algorithm transformation symbolic inspector creates inspection 
graph given sparsity pattern traverses inspection using specific inspection strategy result inspection inspection set contains result running inspector inspection graph inspection sets used guide transformations sympiler additional numerical algorithms transformations added sympiler long required inspectors described manner well kazem cheshmi shoaib kamil michelle mills strout maryam mehri dehnavi sparsity pattern symbolic inspector numerical method transformations code generation transformations initial ast peel prunesetsize vec transformations prunesetsize figure sympiler lowers functional representation sparse kernel imperative code using inspection sets sympiler constructs set loop nests annotates information later used inspectorguided transformations transformations use lowered code inspection sets input apply transformations transformations also provide hints transformations annotating code instance transformation steps code figure initial ast annotated information showing transformations apply symbolic inspector sends pruneset uses add hints case peeling iterations hinted transformations applied final code generated peeling shown iteration zero idx prunesetsize pruneset idx variable iteration space pruning loop pruneset prunesetsize blocksetsize blockset blockset blocking loop blockset blocksetsize figure transformations top loop iteration space transforms loop iteration space prunesetsize use original loop index replaced corresponding value pruneset bottom two nested loops transformed loops blocks sympiler transforming sparse codes motivating example triangular solve used prune loop iterations perform work unnecessary due sparseness matrix right hand side case inspection set inspection strategy perform search inspection graph directed dependency graph triangular matrix example linear system shown figure symbolic inspector generates inputs sparse kernels make blocking optimizations challenging symbolic inspector identifies similar structure sparse matrix methods sparse inputs provide stage blockable sets necessarily size consecutively located blocks similar concept supernodes sparse libraries must deal number challenges block sizes variable sparse kernel due using compressed storage formats block elements may consecutive memory locations type numerical method used may need change applying transformation example applying vsblock cholesky factorization requires dense cholesky factorization diagonal segment blocks segments blocks must updated using set dense triangular solves transformations initial lowered code along inspection sets obtained symbolic inspector passed series passes transform code sympiler currently supports two transformations guided inspection sets variable iteration space pruning blocking applied independently jointly depending input sparsity shown figure code annotated information showing inspectorguided transformations may applied symbolic inspector provides required information transformation phases decide whether transform code based inspection sets given inspection set annotated code transformations occur illustrated figure variable iteration space pruning variable iteration space pruning prunes iteration space loop using information sparse computation iteration space sparse codes considerably smaller dense codes since computation needs consider iterations nonzeros inspection stage sympiler generates inspection set enables transforming unoptimized sparse code code reduced iteration space given inspection set transformation applied particular sparse code 
transform figure figure figure transformation applied loop nest line transformed code iteration space pruned prunesetsize inspection set size addition new loop references loop index transformation replaced corresponding value inspection set pruneset furthermore transformation phase utilizes inspection set information annotate specific loops optimizations applied later stages code generation annotations guided thresholds decide specific optimizations result faster code running example triangular solve generated inspection set symbolic inspector enables reducing iteration space code transformation elides unnecessary iterations due zeros right hand side addition depending number iterations loops run known thanks symbolic inspector loops annotated directives unroll vectorize code generation blocking blocking converts sparse code set dense subkernels contrast conventional approach dense codes input computations blocked smaller uniform unstructured computations address first challenge symbolic inspector uses inspection strategy provides inspection set specifying size block second challenge transformed code allocates temporary block storage copies data needed prior operating block finally deal last challenge synthesized lowering phase contain information block location matrix applying transformation correct operation chosen transformation also annotates loops transformations tiling applied code generation leveraging specific information matrix applying transformation sympiler able mitigate difficulties applying vsblock sparse numerical methods version transformation shown figures shown new outer loop made provides block information inner loops using given blockset inner loop figure transforms two nested loops lines iterate block specified outer loop diagonal version heavily depends domain information detailed examples applying transformation triangular solve cholesky factorization described section enabled conventional transformations applying transformations original loop nests transformed new loops potentially different iteration spaces enabling application conventional transformations based applied transformations well properties input matrix side vectors code annotated transformation directives example annotations shown figure loop peeling annotated within code decide add annotations transformations use parameters average block size main sources enabling transformations kazem cheshmi shoaib kamil michelle mills strout maryam mehri dehnavi symbolic information provides dependency information allowing sympiler apply transformations peeling based figure transformations remove indirect memory accesses annotate code potential conventional transformations code generation enables sympiler know details loop boundaries thus several customized transformations applied vectorization loops iteration counts greater threshold figure shows iterations triangular solve code peeled example inspection set used created topological order iteration ordering dependencies met thus code correctness guaranteed loop peeling shown figure transformed code viprune annotated enabled peeling transformation based number nonzeros columns column count two selected iterations column count greater peeled replace specialized kernel apply another transformation vectorization case studies column pruneset sparsity pattern row every row pruneset update sqrt diagonal elements figure cholesky sympiler currently supports two important sparse matrix computations triangular solve cholesky factorization section discusses graph theory 
algorithms used sympiler symbolic inspector extract inspections sets two matrix methods complexity symbolic inspector also presented evaluate inspection overheads finally demonstrate transformations applied using inspection sets table shows classification inspection graphs strategies resulting inspection sets two studied numerical algorithms sympiler shown table symbolic inspector performs set known inspection methods generates sets includes symbolic information last column table shows list transformations enabled transformation also discuss extending sympiler matrix methods sparse triangular solve theory symbolic inspector dependency graph traversed using depth first search dfs determine inspection figure example matrix factor cholesky factorization corresponding elimination tree also shown nodes columns highlighted color belong supernode red nonzeros set transformation case reachset side vector graph also used detect blocks similar sparsity patterns also known supernodes sparse triangular solve contains columns grouped supernodes identified inspecting using node equivalence method node equivalence algorithm first assumes nodes equivalent compares outgoing edges outgoing edges destination nodes two nodes equal merged transformations using viprune limits iteration spaces loops triangular solve operate necessary nonzeros transformation changes loops apply blocking shown figure triangular solve diagonal block columnblock small triangular solve solved first solution diagonal components substituted segment matrix symbolic inspection time complexity dfs graph proportional number edges traversed number nonzeros rhs system time complexity node equivalence algorithm proportional number nonzeros provide overheads methods tested matrices section cholesky factorization cholesky factorization commonly used direct solvers used precondition iterative solvers algorithm factors hermitian positive definite matrix llt matrix lower triangular matrix figure shows example matrix corresponding matrix factorization theory elimination tree etree one important graph structures used symbolic analysis sparse factorization algorithms figure shows corresponding elimination tree factorizing example matrix etree spanning tree satisfying parent min graph filled graph results end elimination process includes edges original matrix well edges discussions theory behind elimination tree elimination process filled graph found sympiler transforming sparse codes table inspection transformation elements sympiler triangular solve cholesky dependency graph rhs sparsity patterns side vector dfs search sparsity patterns coefficient sparsity patterns row unroll loop unrolling peel loop peeling dist loop distribution tile loop tiling transformations inspection graph rhs triangular solve inspection inspection set strategy dfs node equivalence supernodes figure shows sparse cholesky performed two phases update lines column factorization lines update phase gathers contributions already factorized columns left column factorization phase calculates square root diagonal element applies elements find enables transformation row sparsity pattern computed figure shows information used prune iteration space update phase cholesky algorithm since stored column compressed format etree sparsity pattern used determine row sparsity pattern method finding row sparsity pattern row nonzero etree traversed upwards node node reached marked node found visited nodes subtree sympiler uses similar optimized approach find row sparsity patterns supernodes used 
cholesky found sparsity pattern etree sparsity pattern different created factorization however elimination tree along sparsity pattern used find sparsity pattern prior factorization result memory allocated ahead time eliminate need dynamic memory allocation create supernodes fillin pattern first determined equation based theorem computes sparsity pattern column parent node means exclusion theorem states nonzero pattern union nonzero patterns children etree nonzero pattern column sparsity pattern obtained following rule used merge columns create basic supernodes number nonzeros two adjacent columns regardless diagonal entry equal child two columns merged transformations transformation applies update phase cholesky row sparsity pattern information factorizing column sympiler iterates dependent columns instead columns smaller transformation applies update column inspection graph etree etree colcount cholesky inspection strategy inspection set supernodes enabled dist unroll peel vectorization tile unroll peel vectorization factorization phases therefore outer loop cholesky algorithm figure converted new loop iterates provided references columns inner loops changed blockset diagonal part column factorization dense cholesky needs computed instead square root version resulting factor diagonal elements applies rows sequence dense triangular solves also converts update phase vector operations matrix operations symbolic inspection computational complexity building etree sympiler nearly complexity finding sparsity pattern row proportional number nonzeros row method executed columns results nearly inspection overhead finding includes sparsity detection done nearly supernode detection complexity matrix methods inspection graphs inspection strategies supported current version sympiler support large class commonlyused sparse matrix computations applications elimination tree beyond cholesky factorization method extend commonly used sparse matrix routines scientific applications orthogonal factorization methods incomplete factorized sparse approximate inverse preconditioner computations inspection dependency graph proposed inspection strategies extract supernodes dependency graph fundamental symbolic analyses required optimize algorithms rank update rank increase methods incomplete incomplete cholesky preconditioners implementations factorization algorithms thus sympiler current set symbolic inspectors made support many matrix methods plan extend even larger class matrix methods support optimization methods experimental results evaluate sympiler comparing performance two libraries namely eigen cholmod cholesky factorization method sparse triangular solve algorithm section discusses experimental setup experimental methodology section demonstrate kazem cheshmi shoaib kamil michelle mills strout maryam mehri dehnavi table matrix set matrices sorted based number nonzeros original matrix nnz refers number nonzeros rank matrix sympiler sympiler sympiler eigen problem name cbuckle gyro nnz transformations enabled sympiler generate codes sparse matrix algorithms compared libraries although symbolic analysis performed fixed sparsity pattern sympiler analyze cost symbolic inspector section compare symbolic costs eigen cholmod methodology selected set symmetric positive definite matrices listed table matrices originate different domains vary size matrices real numbers double precision testbed architecture processor cache sizes respectively disabled use dense blas basic linear algebra subprogram routines needed codes 
compiled gcc using option experiment executed times median reported compare performance code cholmod specialized library cholesky factorization eigen general numerical library cholmod provides one fastest implementations cholesky factorization architectures eigen supports wide range sparse dense operations including sparse triangular solve cholesky thus cholesky factorization compare eigen cholmod results triangular solve compared eigen libraries installed executed using recommended default configuration since sympiler current version support node amalgamation setting enabled cholmod cholesky factorization libraries support commonly used supernodal algorithm also algorithm used sympiler sympiler applies either one transformations well enabled transformations currently sympiler implements unrolling scalar replacement loop distibution among possible transformations triangular solve figure sympiler performance compared eigen triangular solve show performance sympiler numeric code effects transformations sympiler performance shown separately performance generated code section shows combination introduced transformations decoupling strategy enable sympiler outperform two libraries sparse cholesky sparse triangular solve triangular solve figure shows performance sympilergenerated code compared eigen library sparse triangular solve sparse rhs nonzero rhs experiments selected less sparse triangular system solver often used algorithms cholesky rank update methods solver matrix factorizations thus typically sparsity rhs sparse triangular systems close sparsity columns sparse matrix tested problems number nonzeros columns less average improvement code refer sympiler numeric eigen library eigen implements approach demonstrated figure symbolic analysis decoupled numerical code however code manipulates numerical values leads higher performance figure also shows effect transformation overall performance sympilergenerated code current version sympiler symbolic inspector designed generate sets applied experiments show ordering often leads better performance mainly sympiler supports supernodes full diagonal block support transformations added sympiler enable automatically decide best transformation ordering whenever applicable vectorization peeling transformations also applied peeling leads higher performance applied iterations related supernodes peeled vectorization always applied lead performance applied sympiler transforming sparse codes sympiler sympiler eigen numeric cholmod numeric triangular solve time eigen time cholesky figure performance sympiler numeric cholesky compared cholmod numeric eigen numeric shows performance code effect lowlevel transformations shown separately transformation already applied baseline code shown matrices benefit transformation sympiler figure since small supernodes often lead better performance sympiler apply transformation average size participating supernodes smaller parameter currently set applied matrices since average supernode size small thus improve performance also since matrices small column count vectorization payoff cholesky compare numerical manipulation code eigen cholmod cholesky factorization sympilergenerated code results cholmod eigen figure refer numerical code performance floating point operations per second eigen cholmod execute parts symbolic analysis user explicitly indicates sparse matrix used subsequent executions however even input user none libraries fully decouple symbolic information numerical code afford separate implementation sparsity 
pattern also implement optimizations fairness using eigen cholmod explicitly tell library sparsity fixed thus report time related library numerical code still contains symbolic analysis shown figure cholesky factorization sympiler performs better cholmod eigen respectively eigen uses approach therefore performance scale well large matrices cholmod benefits supernodes thus performs well large matrices large supernodes however cholmod perform well small matrices large matrices small supernodes sympiler provides highest performance almost tested matrix types demonstrates power code generation sympiler numeric sympiler symbolic eigen figure figure shows sparse triangular solve time sympiler eigen runtime normalized eigen time lower better application aggressive optimizations generating code dense enables sympiler generate fast code sparsity pattern since blas routines small dense kernels often perform well small blocks produced applying vsblock sparse codes therefore libraries cholmod perform well matrices small supernodes sympiler luxury generate code dense instead handicapped performance blas routines generates specialized codes small dense average matrix tuned threshold sympiler call blas routines instead since directly specifies number dense triangular solves important dense cholesky average used decide switch blas routines example average matrices less threshold decoupling calculation numerical manipulation phase also improves performance sympilergenerated code discussed subsection sparse cholesky implementation needs obtain row sparsity pattern elimination tree upper triangular part used cholmod eigen find row sparsity pattern since symmetric lower part stored libraries compute transpose numerical code access upper triangular elements fully decoupling symbolic analysis numerical code sympiler row sparsity information ahead time therefore reach function matrix transpose operations removed numeric code symbolic analysis time symbolic analysis performed sympiler generated code manipulates numerical values since symbolic analysis performed specific sparsity pattern overheads amortize repeat executions numerical code however demonstrated figures even numerical code executed common kazem cheshmi shoaib kamil michelle mills strout maryam mehri dehnavi scientific applications accumulated time sympiler close eigen triangular solve faster eigen cholmod cholesky triangular solve figure shows time sympiler spends symbolic analysis sympiler symbolic sparse triangular solve symbolic time available eigen since discussed eigen uses code figure triangular solve implementation figure shows symbolic analysis numerical manipulation time sympiler normalized eigen sympiler numeric plus symbolic time average slower eigen code addition code generation compilation sympiler costs cost numeric solve depending matrix important note since sparsity structure matrix triangular solve change many applications overhead symbolic inspector compilation paid example preconditioned iterative solvers triangular system must solved per iteration often iterative solver must execute thousands iterations convergence since systems scientific applications necessarily cholesky sparse libraries perform symbolic analysis ahead time sparsity patterns improves performance numerical executions compare analysis time libraries sympiler symbolic inspection time figure provides symbolic analysis numeric manipulation times libraries normalized eigen time time spent sympiler perform symbolic analysis referred sympiler symbolic cholmod symbolic 
Figure 4: Time of Sympiler, CHOLMOD, and Eigen for the Cholesky algorithm, broken into numeric (Sympiler numeric, Eigen numeric, CHOLMOD numeric) and symbolic (Sympiler symbolic, Eigen symbolic, CHOLMOD symbolic) parts; all times are normalized to Eigen's accumulated time, so lower is better.

Figure 4 provides the symbolic analysis and numeric manipulation times for all tools, normalized to Eigen's time. The time spent by Sympiler performing symbolic analysis is referred to as Sympiler symbolic; CHOLMOD symbolic and Eigen symbolic refer to their partially decoupled symbolic code that is run once when the user indicates that the sparsity remains static. In nearly all cases, Sympiler's accumulated time is better than that of the two libraries. Code generation and compilation, not shown in the chart, add only a small amount of time, on the order of the cost of a numeric factorization. Also, as with the triangular solve example, a matrix with a fixed sparsity pattern must often be factorized many times in scientific applications; for example, in solvers for nonlinear systems of equations the Jacobian matrix is factorized in each iteration, and such solvers typically require tens to hundreds of iterations to converge.

Related Work

Compilers for general-purpose languages are hampered in their optimization methods: they either give up on optimizing sparse codes or apply only conservative transformations that do not lead to high performance, due to the indirection required to index and loop over the nonzero elements of sparse data structures. Polyhedral methods are limited to dealing with loop nests whose bounds and subscripts are affine, and the indirect subscripts common in sparse computations make it impossible for such compilers to apply aggressive loop and data transformations to sparse codes. Recent work has developed techniques for automatically creating inspectors and executors and for using such runtime techniques inside compilers: an inspector analyzes the index arrays of the sparse code at runtime, and an executor uses that information to execute the code with specific optimizations. These techniques are, however, limited in application to sparse codes with static index arrays, i.e., codes that do not require the matrix structure to change during the computation. This approach performs well for methods such as sparse incomplete factorizations, where only additional dependencies are introduced into the computation. However, for a large class of sparse matrix methods, the direct solvers, including LU and Cholesky decompositions, the index arrays change dynamically during the computation. Since the algorithm itself introduces additional indirections and dependencies, sparse direct solvers are tightly coupled with the algorithm, making it difficult to apply such inspector-executor techniques.

Domain-specific compilers integrate domain knowledge into the compilation process, improving the compiler's ability to transform and optimize specific kinds of computations. This approach has been used successfully for stencil computations, signal processing, dense linear algebra, matrix assembly, and mesh analysis and simulation, though not for sparse matrix operations, even though some of the simulation compilers support sparse data.

Another approach is to use knowledge of the matrix structure to optimize operations or to build specialized matrix solvers, which is the typical approach of sparse direct-solver libraries. Libraries differ in the numerical methods implemented, the implementation strategy and variant of the solver, the type of platform supported, and whether the algorithm is specialized for specific applications. Each numerical method is suitable for different classes of matrices; for example, Cholesky factorization requires the matrix to be symmetric (Hermitian) positive definite. Libraries such as SuperLU, KLU, UMFPACK, and Eigen provide optimized implementations of LU decomposition; Cholesky factorization is available in libraries such as Eigen, CSparse, CHOLMOD, MUMPS, and PARDISO; and QR factorization is implemented with different optimizations in SPARSPAK, SPOOLES, Eigen, and CSparse. The algorithm variants used to implement a sparse matrix method also differ between libraries; for example, LU decomposition is implemented using multifrontal methods in some libraries and right-looking methods in others. Libraries have also been developed to support different platforms, with sequential, shared-memory, and distributed-memory implementations. Finally, some libraries are designed to perform well on matrices arising from a specific domain; for example, KLU works best for circuit simulation problems, while SuperLU-MT applies optimizations under the assumption that the input matrix structure leads to large supernodes, a strategy that is a poor fit for circuit simulation problems.
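As a concrete illustration of the inspector-executor approach mentioned above, the following hypothetical sketch shows an inspector that scans the static index arrays of a CSC triangular factor once to build level sets (wavefronts) of independent columns; an executor would then sweep the levels, solving the columns of each level in parallel.

#include <algorithm>
#include <vector>

// Columns in the same level have no dependencies between them. Column i
// depends on column j whenever L(i, j) != 0 with i > j, so its level is one
// more than the maximum level of its predecessors.
std::vector<std::vector<int>> inspect_levels(int n, const int *Lp,
                                             const int *Li) {
  std::vector<int> level(n, 0);
  int max_level = 0;
  for (int j = 0; j < n; j++)
    for (int p = Lp[j] + 1; p < Lp[j + 1]; p++) {
      level[Li[p]] = std::max(level[Li[p]], level[j] + 1);
      max_level = std::max(max_level, level[Li[p]]);
    }
  std::vector<std::vector<int>> levels(max_level + 1);
  for (int j = 0; j < n; j++) levels[level[j]].push_back(j);
  return levels;   // executor: for each level, solve its columns in parallel
}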
Conclusion

In this paper, we demonstrated how decoupling the symbolic analysis from the numerical manipulation enables the generation of domain-specific sparse codes for static sparsity patterns. We proposed Sympiler, a code generator that takes a sparse matrix pattern and a sparse matrix algorithm as inputs and performs symbolic analysis; it then uses the information from the symbolic analysis to apply a number of transformations to the sparse code. The Sympiler-generated code outperforms two state-of-the-art sparse libraries, Eigen and CHOLMOD, for the sparse Cholesky and sparse triangular solve algorithms.
Tackling Exascale Software Challenges in Molecular Dynamics Simulations with GROMACS

Carsten Kutzner, Mark James Abraham, Berk Hess, and Erik Lindahl

Science for Life Laboratory, Stockholm and Uppsala, Sweden; Dept. of Theoretical Physics, KTH Royal Institute of Technology, Stockholm, Sweden; Theoretical and Computational Biophysics, Max Planck Institute for Biophysical Chemistry, Germany; Center for Biomembrane Research, Dept. of Biochemistry and Biophysics, Stockholm University. Address correspondence to Erik Lindahl. Some authors contributed equally.

Abstract. GROMACS is a widely used package for biomolecular simulation, and over the last two decades it has evolved from small-scale efficiency to advanced heterogeneous acceleration and multi-level parallelism targeting some of the largest supercomputers in the world. Here, we describe some of the ways we have been able to realize this, and the process used to make use of parallelization on all levels, combined with a constant focus on absolute performance. The current release of GROMACS uses SIMD acceleration on a wide range of architectures, GPU offloading acceleration, and both OpenMP and MPI parallelism within and between nodes, respectively. The recent work on acceleration made it necessary to revisit the fundamental algorithms of molecular simulation, including the concept of neighbor searching, and we discuss the present and future challenges we see for exascale simulation, in particular a very fine-grained task parallelism. We also discuss the software management, code peer review, and continuous-integration testing required for a project of this complexity.

Introduction

Molecular dynamics simulation of biological macromolecules has evolved from a narrow method into a widely applied biophysical research tool that is used well outside theoretical chemistry; supercomputers are now as important as centrifuges or test tubes in chemistry. However, this success also considerably raises the bar for molecular simulation implementations: it is no longer sufficient to reproduce experimental results or merely show relative scaling that justifies the substantial supercomputing resources required by many computational chemistry projects. The most important focus today is simply absolute simulation performance and the scientific results that can be achieved with it.

Exascale computing has the potential to take simulation to new heights, but the combination of challenges that face software being prepared for deployment at the exascale is unique in the history of this software. Gone are the days of simply buying new hardware with a faster clock rate to get shorter times to solution with old software. Gone are the days when running applications on a single core was sufficient. Gone are the days of homogeneous processor designs suiting all computation. Soon gone are the days when performance is bounded by the time taken for computations; they are ending fast, and we need to design for parallelization from the start. These points are here to stay, which also means Amdahl's law is more relevant than ever: it gives a model for the expected maximum speedup of a program parallelized over multiple processors with respect to the serial version, and states that the achievable speedup is limited by the sequential part of the program when the problem size is fixed.

A particular challenge for biomolecular simulations is computational: the geometric size of a protein and the resolution of the model physics are set by the life-science problem and cannot be reduced in size to make the problem smaller. It is possible to simulate much larger systems, but the typically relevant timescales of dynamics involving the entire system increase much faster than the length scale, due to the requirement of sampling an exponentially larger number of ensemble microstates. This means weak scaling is largely irrelevant for life science; to make use of increasing amounts of computational resources to simulate these systems, we must rely either on software engineering techniques or on ensemble simulation techniques.

The fundamental algorithm of molecular dynamics assigns positions and velocities to every particle in the simulation system and specifies the model physics that governs the interactions between particles. The forces are computed and used to update the positions and velocities via Newton's second law, using a given finite time step in a numerical integration scheme. Iterated a large number of times, this generates a series of samples from the thermodynamic ensemble defined by the model physics, from which observations can be made that confirm or predict experiment.
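A minimal sketch of this core loop, using leapfrog integration (the default scheme in GROMACS), is shown below; force_calc() stands in for the bonded, non-bonded, and lattice-summation force computations and is an assumed placeholder, not a real API.

#include <vector>

struct Vec3 { double x, y, z; };

void force_calc(const std::vector<Vec3> &r, std::vector<Vec3> &f);  // assumed

void md_loop(std::vector<Vec3> &r, std::vector<Vec3> &v,
             const std::vector<double> &inv_mass, double dt, long nsteps) {
  std::vector<Vec3> f(r.size());
  for (long step = 0; step < nsteps; step++) {
    force_calc(r, f);                          // model physics: F_i(r)
    for (std::size_t i = 0; i < r.size(); i++) {
      // leapfrog: v(t+dt/2) = v(t-dt/2) + (F/m)*dt; r(t+dt) = r(t) + v*dt
      v[i].x += f[i].x * inv_mass[i] * dt;
      v[i].y += f[i].y * inv_mass[i] * dt;
      v[i].z += f[i].z * inv_mass[i] * dt;
      r[i].x += v[i].x * dt;
      r[i].y += v[i].y * dt;
      r[i].z += v[i].z * dt;
    }
  }
}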
A typical model physics has many components: different kinds of bonded interactions between specific particles, plus non-bonded interactions that model behavior like van der Waals forces and Coulomb's law between all pairs of particles. The non-bonded interactions are the most expensive aspect of computing the forces and have been the subject of a large amount of research, computation, and optimization. Historically, the GROMACS molecular dynamics simulation suite has aimed to be a general-purpose tool for studying biomolecular systems, such as the one shown in Fig. 1. The development of the simulation engine focused heavily on maximizing the performance of the innermost compute kernels that evaluate the non-bonded interactions. These kernels typically compute the electrostatic and van der Waals forces acting on each simulation particle from its interactions with all particles inside a given spherical cutoff boundary. The kernels were first written in Fortran and later optimized in assembly language, mostly for commodity processors, because the data dependencies of the computations in the kernels were too challenging for the Fortran compilers of the day. The kernels were also specialized for interactions within and between water molecules, because of the prevalence of such interactions in biomolecular simulations; this extensive use of hand-tuned kernels can be seen as the software equivalent of application-specific integrated circuits.

Figure 1: A typical GROMACS simulation system, featuring the membrane protein GLIC (colored) embedded in a lipid membrane (grey). The whole system is solvated in water (also shown). Image created with VMD.

Recognizing the need to build upon that good work by coupling multiple processors, GROMACS 4 introduced a neutral-territory algorithm with fully dynamic load balancing of the spatial-decomposition domains of the simulation volume. This created data parallelism that was effective in scaling the computation down to small numbers of atoms per core, but the implementation required the use of MPI parallel constructs. However, the needs of many simulation users can be met within a single node, and in that context the implementation overhead of MPI libraries was high, not to mention difficult to employ in distributed-computing settings. GROMACS therefore implemented a multi-threaded MPI library providing the necessary subset of the MPI API. The library uses either POSIX or Windows threads (and is hence called thread-MPI) together with highly efficient atomic synchronization primitives, which allows the existing domain-decomposition implementation to work across multiple cores of a single node without depending on any external MPI library.

However, a fundamental limitation remained in the mapping of MPI ranks to cores via domains. On the one hand, there is always a limit to how small a spatial domain can be, which limits the number of domains into which the simulation box can be decomposed, which in turn limits the number of cores that such a parallelization can utilize. On the other hand, the domains-to-cores mapping creates independent data sets, so cores sharing caches cannot act on shared data without conflict, and the volume of data that must be communicated between neighboring domains so that they act coherently grows rapidly with the number of domains. This approach is not scalable for a fixed problem size: once the latency of communication between cores becomes comparable to the compute time per step, the communication overhead grows with the number of cores, and network latencies are orders of magnitude higher still. This is clearly a major problem when designing for the exascale, where with very many cores on very many nodes, memory and communication latencies are the key attributes.

Another important aspect for the target simulations when designing for strong scaling is the treatment of the long-range components of the atomic interactions. Many systems of interest are spatially heterogeneous on the nanometer scale, such as proteins embedded in membranes and solvated in water, and the simulation artefacts caused by failing to treat long-range effects properly are well known. The de facto standard for treating long-range electrostatic interactions has become the smooth particle-mesh Ewald (PME) method, whose cost for N atoms scales as N log(N). A straightforward implementation in which every rank of the parallel computation participates in an equivalent way leads to a 3D fast Fourier transform (FFT) that communicates globally, and this communication quickly limits strong scaling. To mitigate this, GROMACS 4 introduced a multiple-program multiple-data (MPMD) implementation that dedicates some ranks to the FFT part, so that only those ranks perform the all-to-all FFT communication. GROMACS 4.5 improved on this by using a 2D pencil decomposition in reciprocal space, still within the MPMD implementation. This task parallelism works well on machines with homogeneous hardware, but it is harder to port to accelerators or to combine with RDMA constructs.
A transformation of GROMACS was needed to perform well on modern and future parallel hardware. The effort we began requires radical algorithm changes and better use of parallelization constructs designed in from the ground up rather than added as an afterthought, and many hands are required to steer the project. Yet old functionality written over a long time must generally be preserved, and with computer architectures evolving rapidly, no single developer can know all the details. The following sections describe how we are addressing some of these challenges and our ongoing plans for addressing others.

Handling Exascale Software Challenges: Algorithms and Parallelization Schemes

Multi-level parallelism. Modern computer hardware is not only parallel, but exposes multiple levels of parallelism depending on the type and speed of data access and the communication capabilities across different compute elements. A modern superscalar CPU such as Intel Haswell, even within a single core, is equipped with several different execution ports, and it is not even possible to buy a chip without SIMD units, added hardware threads, complex communication crossbars, and memory hierarchies with multiple levels of cache. The result is a complex, hierarchical organization of compute elements, from SIMD units and caches up to network topologies, with each level of the hierarchy requiring a different type of software parallelization for efficient use. HPC codes have traditionally focused on only one or two levels of parallelism; most codes rely solely on MPI parallelization to target parallelism on multiple levels. The MPI-only approach had obvious advantages before the heterogeneous computing era, when performance improvements came from CPU frequency scaling and the evolution of interconnects. Nowadays, however, scientific problems require a complex parallel software architecture to be able to use petaflop hardware efficiently, and going toward exascale this is becoming a necessity. This is particularly true for molecular dynamics, where improving simulation performance requires reducing the wall-clock time per iteration.

At the lowest level, processors typically contain SIMD (single instruction, multiple data) units, which offer silicon dedicated to executing a limited set of instructions on multiple data elements simultaneously. Exploiting this fine-grained parallelism has become crucial for achieving high performance, especially with new architectures like AVX and Intel MIC supporting wide SIMD. One level higher, multi-core CPUs have become the standard, and several architectures support multiple hardware threads per core; hence, typical SMP machines come with dozens of cores capable of running even more threads with simultaneous multithreading (SMT) support. Simply running multiple processes (MPI ranks) per core or hardware thread is typically less efficient than multi-threading, and achieving strong scaling in molecular dynamics requires efficient use of the cache hierarchy, which makes the picture even more complex. Furthermore, a multi-core chip, which cannot itself be considered homogeneous, combined in a node with either an accelerator or coprocessor like GPUs or Intel MIC (often referred to as heterogeneous nodes) adds another layer of complexity: accelerators require fine-grained parallelism and carefully tuned data access patterns, as well as special programming models.

Current accelerator architectures like GPUs also add another layer of interconnect in the form of the PCIe bus (Peripheral Component Interconnect Express), as well as a separate main memory, which means that data movement across the PCIe link often limits overall throughput. Tighter integration of traditional CPU cores with cores like those of GPUs or MIC accelerators is ongoing, but the cost of data movement between the different units is, at least for the foreseeable future, a factor that needs to be optimized for.

Typical HPC hardware also exhibits non-uniform memory access (NUMA) behavior at the node level: accessing data from different CPUs, or even different cores of the same CPU, has a non-uniform cost. We started multi-threading trials quite early with the idea of easily achieving load balancing, but the simultaneous introduction of NUMA suddenly meant a processor resembled a cluster internally: indiscriminately accessing memory across NUMA nodes frequently leads to performance lower than with MPI. Moreover, NUMA behavior extends to other compute and communication components, where the cost of communicating with an accelerator or a network interface typically depends on the bus topology and requires special attention. At the top level, the interconnect links compute nodes into a network topology. With the evolution of networks, the capacity, latency, and bandwidth per compute node have improved, but the typical number of CPU cores
they serve has increased even faster, so the capacity available per core has decreased substantially. In order to exploit the capabilities of each level of hardware parallelism, an application needs to consider multiple levels of parallelism: SIMD parallelism for maximizing single-core performance; multi-threading to exploit the advantages of SMT and fast intra-node data sharing; inter-node parallelism via message passing with MPI; and heterogeneous parallelism utilizing both CPUs and accelerators like GPUs. Driven by this evolution of hardware, we initiated a reworking of the parallelization in GROMACS, and in particular the recent efforts have focused on improvements targeting all levels of parallelization: new algorithms for wide SIMD and accelerator architectures, a portable and extensible SIMD parallelization framework, efficient multi-threading throughout the entire code, and asynchronous offloading to accelerators. The resulting multi-level parallelization scheme implemented in GROMACS is illustrated in Fig. 2. The following sections give an overview of these improvements, highlighting the advances they provide in terms of making efficient use of current petascale hardware, as well as paving the road towards exascale computing.

SIMD parallelism. Modern CPU and GPU architectures use SIMD-style instructions to achieve high flop rates, and any computational code aiming for high performance has to make use of SIMD. For regular work, like a sequence of multiplications, the compiler can generate good SIMD code, although manually tuned vendor libraries are typically even better. For irregular work, like the short-range particle-particle interactions, the compiler usually fails, since it cannot reason sufficiently about the control flow and data structures; if you think your compiler is really good at optimizing, it is an instructive experience to look at the raw assembly instructions it actually generates. GROMACS reluctantly recognized this a decade ago, when SSE and Altivec SIMD kernels were written manually in assembly. Those kernels were, and still are, extremely efficient for interactions involving water molecules, which parallelize well in SIMD using the standard unrolling approach, but it was clear that a different approach would be needed in order to use wide SIMD execution units like AVX or GPUs. For this, we developed a novel cluster-based approach to computing the particle-particle interactions.

Figure 2: Illustration of the multi-level parallelism in GROMACS. It exploits several kinds of fine-grained data parallelism (SIMD operations, OpenMP threads on CPU cores, CUDA on NVIDIA GPUs), an MPMD decomposition separating the short-range particle-particle and long-range Particle Mesh Ewald (PME) force calculations over MPI ranks, coarse-grained data parallelism with domain decomposition over MPI ranks on single workstations or compute clusters, and ensembles of related simulations scheduled by a distributed-computing controller.

In this scheme, particles are grouped into spatial clusters containing a fixed number of particles: the particles are first placed onto a grid in two dimensions and then binned in the third dimension, which efficiently groups particles that are close in space and permits the construction of a list of clusters each containing exactly M particles. From this, a pair list is constructed of cluster pairs containing all particle pairs that may be close enough to interact; the list of pairs of interacting clusters is reused over multiple successive force evaluations, and it is constructed with a buffer to prevent particle diffusion from corrupting the implementation of the model physics.

The kernels that implement the computation of the interactions between two clusters use SIMD load instructions to fill vector registers with copies of the positions of the particles in cluster i. The loop over the i-particles is unrolled according to the SIMD width of the CPU, and inside this loop, SIMD load instructions fill vector registers with the positions of the particles in cluster j. This permits the computation of several interactions simultaneously, and the computation of all interactions in the inner loop proceeds without needing to load particle data again. On wide SIMD units it is more efficient to process more than one j-cluster at a time, and the number of clusters processed at once can be adjusted to suit the underlying hardware characteristics.
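The structure of such a cluster-pair kernel can be sketched in scalar form as below; this is a simplified illustration (cutoff checks, exclusions, and periodicity are omitted, and the cluster sizes are placeholders), where in the real kernels the inner j-loop maps onto SIMD lanes and the i-loop is unrolled.

#include <cmath>

constexpr int M = 4, N = 4;   // cluster sizes, chosen per architecture

// Coulomb interactions between all M x N particle pairs of two clusters:
// the force magnitude over r is q_i * q_j / r^3, applied along (dx, dy, dz).
void cluster_pair(const float xi[M][3], const float xj[N][3],
                  const float qi[M], const float qj[N],
                  float fi[M][3], float fj[N][3]) {
  for (int i = 0; i < M; i++) {        // i-particles stay in registers
    for (int j = 0; j < N; j++) {      // j-particles: one SIMD load per step
      float dx = xi[i][0] - xj[j][0];
      float dy = xi[i][1] - xj[j][1];
      float dz = xi[i][2] - xj[j][2];
      float r2 = dx * dx + dy * dy + dz * dz;
      float rinv = 1.0f / std::sqrt(r2);  // SIMD code uses fast rsqrt + iteration
      float fscal = qi[i] * qj[j] * rinv * rinv * rinv;
      fi[i][0] += fscal * dx;  fj[j][0] -= fscal * dx;
      fi[i][1] += fscal * dy;  fj[j][1] -= fscal * dy;
      fi[i][2] += fscal * dz;  fj[j][2] -= fscal * dz;
    }
  }
}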
Using 1x1 clusters recovers the original particle-based algorithm. On CPUs, GROMACS uses cluster sizes that depend on the SIMD width; on NVIDIA GPUs, we use larger clusters to calculate interactions with groups of hardware threads executing in lockstep. To improve the ratio of arithmetic to memory operations when using GPUs, we add another level to the hierarchy by grouping clusters together: we can thus store the particles of such a supercluster in shared memory and calculate the interactions of half of every particle pair in the list only once. These kernel implementations can reach a large fraction of the peak flop rate supported by the hardware. This high efficiency comes at the cost of calculating more interactions than strictly required: not all particle pairs in the listed cluster pairs are within the cutoff at each time step, so many interactions are computed that are known to produce a zero result. The extra zero interactions can actually be put to use as an effective additional pair-list buffer, on top of the standard Verlet-list buffer. The scheme is flexible, since it can be adapted to current and future hardware, and most algorithms and optimization tricks developed for standard pair lists can be reused for the cluster pair list, although many require adaptation to improve performance. The current implementation of the algorithm already supports a wide range of SIMD instruction sets and accelerator architectures, including AVX (with and without FMA), QPX, Intel MIC (LRBni), and NVIDIA CUDA, and an implementation for a field-programmable gate array (FPGA) architecture is in progress.

Multi-threaded parallelism. GROMACS has long relied mostly on MPI for both inter- and intra-node parallelization over CPU cores. For domain decomposition this worked well, since there is little data to communicate and at medium to high parallelization all data fits in cache. The initial plan was to support OpenMP parallelization only in the separate Particle Mesh Ewald (PME) MPI ranks. The reason for using OpenMP in the PME ranks was to reduce the number of MPI ranks involved in the costly collective communication for the FFT grid transpose, the part of the code that involves global data dependencies. Although this indeed greatly reduced the MPI communication cost, it also introduced significant overhead.

GROMACS has since been designed to use OpenMP in all compute-intensive parts of its algorithms. Most of the algorithms are straightforward to parallelize using OpenMP and scale well, as Fig. 3 shows. Other parts of the code, like performing the domain decomposition or integrating the forces and velocities, show slightly worse scaling. Moreover, the scaling of these parts tends to deteriorate with an increasing number of threads per MPI rank, especially when teams of OpenMP threads cross NUMA boundaries. When simulating at high core-to-atom ratios, a step can take as little as a microsecond, and with the many OpenMP barriers used in the many code paths parallelized with OpenMP, each of which takes microseconds, this becomes costly.

Figure 3: Comparison of simulation performance using MPI, OpenMP, and combined MPI+OpenMP parallelization. OpenMP-only parallelization (blue) achieves the highest performance and near-linear scaling at low thread counts, but deteriorates at high thread counts when the OpenMP regions need to communicate across the system bus. In contrast, MPI-only runs (red) require less communication and scale well across sockets. Combining MPI and OpenMP parallelization with two ranks and a varying number of threads (green) results in worse performance due to the added overhead of the two parallelization schemes. Simulations were carried out on a node with Intel Xeon (Sandy Bridge) processors; the input system is the RNase protein, solvated in a rectangular box, with PME electrostatics.

Accordingly, hybrid MPI+OpenMP parallelization is often slower than an MPI-only scheme, as Fig. 3 illustrates, since the MPI ranks there need only local communication and the reduction in MPI communication from the hybrid scheme becomes apparent only at high parallelization. On the other hand, MPI-only parallelization in GROMACS puts a hard upper limit on the number of cores that can be used, due to the algorithmic limits on the spatial domain size and the need to communicate with more than one nearest neighbor. With the hybrid scheme, multiple cores operate on the spatial domain assigned to each MPI rank, so this hard limit on parallelization is lifted: strong-scaling curves extend much further, with only a gradual loss of parallel efficiency. An example is given in Fig. 4, which shows a membrane protein system scaling to twice as many cores with hybrid parallelization, reaching double the peak performance of earlier GROMACS versions. In such cases, hybrid parallelization is much faster than MPI-only parallelization because the load for each stage of the force computation can be balanced
individually. A typical example is a solute in solvent: the solute has bonded interactions but the solvent does not, and with OpenMP the bonded interactions can be distributed equally over all threads in a straightforward manner.

Figure 4: Improvements in strong-scaling performance across GROMACS versions, using the new kernels and OpenMP parallelization. The plot shows simulation performance for different software versions and parallelization schemes: performance with one core per MPI rank for earlier and current versions (purple and black), and for the current version also using two (red) and four (green) cores per MPI rank with OpenMP threading within each rank. Simulations were carried out on the Triolith cluster at NSC, using two Intel Xeon (Sandy Bridge) processors per node and an FDR InfiniBand network. The test system is the GLIC membrane protein shown in Fig. 1, with PME electrostatics.

Heterogeneous parallelization. Heterogeneous architectures combine multiple types of processing units, typically CPUs together with accelerators like GPUs, Intel MIC, or FPGAs. Accelerator architectures have become increasingly popular in technical and scientific computing, mainly due to their impressive raw floating-point performance, but in order to utilize them efficiently, a high level of fine-grained parallelism is required. The massively parallel nature of accelerators, in particular GPUs, is both an advantage and a burden on the programmer, since not all tasks are well suited for execution on accelerators, which often leads to additional challenges in workload distribution and load balancing. Moreover, current heterogeneous architectures typically use the slow PCIe bus to connect the hardware elements like CPUs and GPUs and to move data to and from the accelerator's separate global memory. This means explicit data management is required, which adds latency and overhead, a challenge for algorithms like molecular dynamics that already face these issues at high parallelization.

GPU accelerators were first supported experimentally in GROMACS with the OpenMM library, which was used as a black box to execute the entire simulation on the GPU. This meant only a fraction of the diverse set of GROMACS algorithms was supported, and simulations were limited to single-GPU use. Additionally, while OpenMM offered good performance for implicit-solvent models, for the most common type of runs it showed little speedup, and in some cases slowdown, compared to the fast performance on CPUs achieved thanks to the highly tuned SIMD assembly kernels.

With this experience, we set out to provide native GPU support in GROMACS with important design principles in mind. Building on the observation that highly optimized CPU code is hard to beat, our goal has been to ensure that all compute resources available, both CPU and accelerators, are utilized to the greatest extent possible. We also wanted heterogeneous GPU acceleration to support most existing features of GROMACS in a single code base, to avoid having to re-implement major parts of the code for accelerator-only execution. This means the most suitable parallelization is an offload model, which other codes have also employed successfully. As Fig. 5 illustrates, we aim to execute the non-bonded force calculation on the GPU accelerators while the CPU computes the bonded forces and the lattice-summation electrostatics, since the latter is communication intensive.

The newly designed algorithm for evaluating non-bonded interactions, developed with accelerator architectures in mind and discussed already above, is highly efficient at expressing the fine-grained parallelism present in the pair-force calculation. Additionally, the cluster-based approach is designed for data reuse, which is further emphasized by the supercluster grouping. As a result, the CUDA implementation is characterized by a high ratio of arithmetic to memory operations, which allows memory bottlenecks to be avoided. These algorithmic design choices and extensive performance tuning led to strongly instruction-latency-bound CUDA kernels, in contrast to traditional GPU pair-interaction algorithms, which are reported to be memory bound. The CUDA GPU kernels also scale well, reaching peak pair-force throughput already at a relatively small number of particles per GPU.

In contrast to typical programming for homogeneous machines, heterogeneous architectures require additional code to manage task scheduling and concurrent execution on the different compute elements, in the present case CPU cores and GPUs. This is the most complex component of our heterogeneous parallelization, and it implements the main goal of maximizing the utilization of both CPU and GPU while ensuring optimal execution overlap; to do so, we combine a set of CPU cores running OpenMP threads with a GPU.
As shown in Fig. 5, in each step the required pair-list data are prepared on the CPU and transferred to the GPU, where a pruning step is carried out on the lists, which are then reused over many iterations; the extreme floating-point power of GPUs makes it feasible to use the much larger buffers this requires. Each step, the transfers of coordinates, charges, and forces as well as the compute kernels are launched asynchronously as soon as the data become available on the CPU, which ensures the overlap of CPU and GPU computation. Additional effort has gone into maximizing this overlap by reducing the non-overlapping program parts through SIMD parallelization of the pair search and the constraints, with efficient multi-threaded algorithms allowing GROMACS to achieve a high typical overlap.

Figure 5: GROMACS heterogeneous parallelization using both CPU and GPU resources during a simulation step. The non-bonded interactions are offloaded to the GPU while the CPU is responsible for the bonded force calculation and the lattice-summation (PME) algorithms. The diagram shows the tasks carried out in a normal MD step (black arrows) as well as in a pair-search step, which includes additional tasks (blue arrows): the pair search and domain decomposition, the additional transfer of the pair list, and the subsequent pruning of the pair list as part of the first non-bonded kernel invocation.

The scheme naturally extends to multiple GPUs by using the existing efficient domain decomposition, implemented with MPI parallelization. By default, we assign the computation of a domain to a single GPU and a set of CPU cores, which typically means decomposing the system into as many domains as GPUs used, and running as many MPI ranks per node as there are GPUs in the node. However, this can require running a large number of OpenMP threads per rank, even with a single GPU per node, potentially spanning multiple NUMA domains. As explained in the previous section, this leads to suboptimal multi-threaded scaling, especially affecting the algorithms outside the CPU-GPU overlap region; to avoid this, multiple MPI ranks can share a GPU, which reduces the number of OpenMP threads per rank.

The heterogeneous acceleration in GROMACS delivers a substantial speedup when comparing CPU-only with GPU-accelerated runs. Moreover, advanced features like arbitrary simulation box shapes and virtual interaction sites are all supported (Fig. 6). Even with the overhead of managing an accelerator, GROMACS shows great strong scaling in GPU-accelerated runs, reaching high performance for common simulation systems (Fig. 7).

Based on a similar parallelization design, an upcoming GROMACS version will also support the Intel MIC accelerator architecture. Intel MIC supports native execution of standard MPI codes using the so-called symmetric mode, in which the card is essentially treated as a separate node. However, MIC is a highly parallel architecture requiring fine-grained parallelism in many more parts of the code, so typical MPI codes execute inefficiently compared to regular processors. Hence, the efficient utilization of Xeon Phi devices in molecular dynamics, especially with typical simulations in mind, is only possible by treating these accelerators similarly to GPUs: a parallelization scheme based on offloading only the tasks suitable for the wide SIMD and highly multi-threaded execution of the MIC.
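The asynchronous structure of one offloaded step can be sketched against the plain CUDA runtime API as below. This is an illustration of the overlap pattern in Fig. 5, not GROMACS source code; the kernel and the two CPU-side functions are assumed placeholders.

#include <cuda_runtime.h>

__global__ void nonbonded_kernel(const float3 *x, float3 *f, int n); // assumed
void compute_bonded_and_pme_on_cpu();                                // assumed
void reduce_forces_and_integrate();                                  // assumed

void do_step(cudaStream_t s, const float3 *h_x, float3 *d_x,
             float3 *h_f, float3 *d_f, int n) {
  int nthreads = 128, nblocks = (n + nthreads - 1) / nthreads;
  // H2D copy, kernel, and D2H copy are queued asynchronously in one stream
  cudaMemcpyAsync(d_x, h_x, n * sizeof(float3), cudaMemcpyHostToDevice, s);
  nonbonded_kernel<<<nblocks, nthreads, 0, s>>>(d_x, d_f, n);
  cudaMemcpyAsync(h_f, d_f, n * sizeof(float3), cudaMemcpyDeviceToHost, s);
  compute_bonded_and_pme_on_cpu();  // CPU work overlaps with the GPU stream
  cudaStreamSynchronize(s);         // wait for the non-bonded forces
  reduce_forces_and_integrate();    // combine CPU and GPU force contributions
}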
Ensemble simulations. Performance and scaling advances in GROMACS and many other programs have made it efficient to run simulations of systems that were simply not large enough to parallelize well a few years ago. However, infrastructures such as the European PRACE provide access only for problems that can scale to thousands of cores, which used to be an impossible barrier for biomolecular dynamics with anything but ridiculously large systems when the implementation could only run well down to hundreds of particles per core. Scaling has improved, but the number of computational units in supercomputers is growing even faster, and multiple machines in the world now reach roughly a million cores. Under ideal conditions, GROMACS can scale to these levels when each rank handles a modest number of atoms, but concrete biological problems rarely require extremely large atom counts without corresponding increases in the number of samples generated; even in the theoretical case where we could improve scaling to the point where each core contains a single atom, the simulation systems in the example of Fig. 7 would still be almost an order of magnitude too small.

Figure 6: An important feature of the current heterogeneous GROMACS GPU implementation is that it not only works, but works efficiently, in combination with most other features of the software: GPU simulations employing domain decomposition, non-cubic boxes, pressure scaling, and virtual interaction sites all significantly improve absolute simulation performance compared to the baseline. The simulation system is the RNase protein, solvated in a rectangular box and in a rhombic dodecahedron box, with PME electrostatics. Hardware: Intel Xeon (Westmere) with NVIDIA Tesla (Fermi) GPU accelerators.

Figure 7: Strong scaling of GROMACS on the Hydra heterogeneous machine in Garching, Germany; grey lines indicate linear scaling. The hybrid version of GROMACS scales well and achieves impressive absolute performance for both small and large systems: for the smaller systems, peak performance is achieved at a few hundred atoms per core, and the larger systems achieve a sustained effective flop rate in the petaflop range, counting the number of useful operations in the total simulation. The systems are typical production systems of different sizes (circles, stars, and triangles). Hardware per node: Intel Xeon (Ivy Bridge) processors with NVIDIA GPUs, InfiniBand network.

To adapt to this reality, researchers are increasingly using large ensembles of simulations, either to simply sample better, or with new algorithms such as replica-exchange simulation, Markov state models, or milestoning that analyze and exchange data between multiple simulations to improve overall sampling. In many cases this achieves much better than linear scaling: such super-scaling ensembles of simulations running on a set of nodes might provide better sampling efficiency than a single simulation running on many times as many cores. To automate this, we have created a new framework for parallel adaptive molecular dynamics in GROMACS, called Copernicus. Given a set of input structures and sampling settings, this framework automatically starts a first batch of sampling runs, makes sure the simulations complete (with extensive support for checkpointing and restarting of failed runs), and automatically performs the adaptive step of data analysis to decide which new simulations to start in a second generation. Current ensemble sampling algorithms scale to hundreds or thousands of parallel simulations, each using thousands of cores, even for small systems; for the first time in many years, molecular dynamics might actually be able to use all the cores available on the largest supercomputers, rather than constantly being a generation or two behind.

Multi-level load balancing. Achieving strong scaling to higher core counts for a fixed-size problem requires careful consideration of load balance: the advantage provided by the spatial decomposition, data locality and reuse, becomes a problem when the distribution of computational work is not homogeneous, so care is needed. A typical membrane protein simulation is dominated by water, which is usually treated with a rigid model; the lipid membrane has alkyl tails often modeled as particles with zero partial charge and bonds constrained to fixed length; and the protein is modeled with backbone bonds that require a lengthy series of constraint calculations, as well as partial charges on all particles. These problems are well known and addressed in the GROMACS domain-decomposition scheme via automatic dynamic load balancing that distributes the spatial volumes unevenly according to the observed imbalance in compute load. The approach has its limitations: it works at the level of domains, which must map to MPI ranks, so cores within a node or socket cannot exchange work without unnecessary copies of data. We have not yet succeeded in developing a highly effective decomposition of this work over multiple cores, and we hope to address it via task parallelism.

One advantage of the PME algorithm as implemented in GROMACS is that it is possible to shift computational workload between the real-space and reciprocal-space parts of the algorithm. This makes it possible to write code that can run optimally at different settings on different kinds of hardware, because the performance of the compute, communication, and bookkeeping parts of the overall algorithm vary greatly with the characteristics of the hardware that implements them, and with the properties of the simulation system being studied. For example, shifting compute work from reciprocal to real space can make better use of an otherwise idle GPU, even though it increases the volume of data that must be communicated to it, while lowering the communication volume required for the FFTs; evaluating how best to manage these compromises has to happen at runtime.
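The shape such a runtime evaluation could take is sketched below; this is purely illustrative and not the actual tuning code, with time_steps() and the scan factors being assumptions. Scaling the real-space cutoff and the reciprocal-space grid spacing by the same factor keeps the overall Ewald accuracy roughly constant while moving work between the direct and lattice sums.

// Hypothetical sketch: try a few consistent (cutoff, grid spacing) pairs,
// time a few MD steps for each, and keep the fastest setup.
struct PmeSetup { double cutoff, grid_spacing; };

double time_steps(const PmeSetup &setup);   // assumed: runs and times steps

PmeSetup pick_fastest(const PmeSetup &base) {
  PmeSetup best = base;
  double best_time = time_steps(base);
  for (double s : {1.1, 1.2, 1.4}) {        // illustrative scan factors
    PmeSetup trial{base.cutoff * s, base.grid_spacing * s};
    double t = time_steps(trial);
    if (t < best_time) { best_time = t; best = trial; }
  }
  return best;
}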
The MPMD version of PME is intended to reduce the overall communication cost on typical switched networks by minimizing the number of ranks participating in the FFTs, but it requires generating a mapping of PME ranks and scheduling the data transfers. However, on hardware with relatively efficient implementations of global communication it can be advantageous to prefer an SPMD implementation with its more regular communication patterns, and the same may be true on architectures with accelerators, where the MPMD implementation makes no use of accelerators on the PME ranks. The performance of both implementations is limited by the lack of overlap of communication with computation. Attempts to use partitioned global address space (PGAS) methods, which require SPMD approaches, are particularly challenged here: the gain from the decrease in communication latency must also overcome the overall increase in communication that accompanies the transition away from MPMD. The advent of implementations of non-blocking collective (NBC) MPI routines is promising if computation can be found to overlap with the background communication. A straightforward approach would be to revert to SPMD and hope that the increase in total communication cost is offset by the gain in available compute time; however, the available performance would still be bounded by the overall cost of global communication. Finding compute overlap for NBC on the MPMD PME ranks is more likely to deliver better results: permitting the PME ranks to execute kernels for bonded and non-bonded interactions associated with other ranks is the most straightforward way to achieve such overlap. This is particularly true at the scaling limit, where the presence of bonded interactions is one of the primary problems in balancing the compute load between the ranks. The introduction of automatic ensemble computing adds yet another layer of decomposition, so that we essentially achieve MSMPMD parallelism: multiple-simulation ensemble space, multiple-program particle/mesh space, and multiple-data domain decomposition.

Managing the long-range contributions at the exascale: a promising candidate for biomolecular simulations at the exascale is the use of suitable implementations of fast-multipole methods such as ExaFMM. At least one implementation of fast-multipole-based molecular dynamics running on very large core counts has been reported, so far with throughput for problems of comparable size not equivalent to the best implementations of PME-based algorithms. Fast-multipole methods can deliver linear scaling of communication and computation with the number of MPI ranks and the number of particles, and with linear scaling there is an expected advantage from the increasing number of processing units in the exascale era. Early tests showed that iteration times of ExaFMM work and GROMACS work on homogeneous systems were of comparable size, and we hope to deploy a working version in the future.

Fine-grained task parallelism for exascale. We plan to address some of the problems mentioned above with the use of task parallelism, which is not currently possible in GROMACS. Considerable technical challenges remain in converting the existing loop constructs into series of tasks that are coarse enough to avoid spending lots of time scheduling work, yet fine enough to balance the overall load. Our initial plan is to experiment with the Thread Building Blocks (TBB) library, which can coexist with OpenMP while we deploy equivalent loop constructs in the early phases of the development. Many alternatives exist, but most require the use of custom compilers, runtime environments, or language extensions, which are unattractive because they increase the number of combinations of algorithm implementations that must be maintained and tested, or compromise the high portability GROMACS currently enjoys.

One particular problem that might be alleviated with task parallelism is reducing the cost of the communication required during the integration phase. Polymers such as protein backbones are modeled with bonds, at least two bonds per particle, which leads to coupled constraints; when the domain decomposition spreads these over multiple ranks, iterating to satisfy the constraints is a costly part of the algorithm at high parallelism. Because the spatial regions that contain such bonded interactions are distributed over many ranks, the constraint computations cannot begin until the forces on all of their atoms have been computed, and the current implementation waits until the forces on all ranks have been computed before starting the integration phase. The performance of this phase is bounded by the latency of the multiple communication stages required, which means ranks whose domains lack atoms with coupled bonded interactions, such as those containing only water molecules, have literally nothing to do during this stage. In an ideal implementation, such ranks could contribute early to the iterations needed to complete the constraint tasks: the forces on atoms involved in coupled bond constraints would be prioritized, and the integration of all other atoms could take place while the communication for the constraint iteration is in flight, overlapped with the computation of forces for interactions of unrelated atoms. This kind of implementation would require considerably more flexibility in the execution model than is simply available today.
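A minimal sketch of this kind of task decomposition, using Intel TBB's task_group, is shown below; the two functions are illustrative placeholders for the work described above, not GROMACS code.

#include <tbb/task_group.h>

void iterate_coupled_constraints();    // assumed: latency-bound, few ranks
void integrate_unconstrained_atoms();  // assumed: e.g. water, can run early

void integrate_step() {
  tbb::task_group g;
  g.run([] { iterate_coupled_constraints(); });
  g.run([] { integrate_unconstrained_atoms(); });  // overlaps with the above
  g.wait();                                        // join before the next step
}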
Handling Exascale Software Challenges: Process and Infrastructure

Managing the code base. The transition of the major part of the GROMACS code base, around a million lines of code, has been under way for several versions (see http://www.gromacs.org). Ideally, software engineering on moderately large code bases would take place within the context of effective abstractions: for example, someone developing a new integration algorithm should not need to pay attention to whether the parallelization is implemented with constructs from a threading library like POSIX threads, a threading layer like OpenMP, an external library like MPI, or remote direct memory access like SHMEM. Equally, they should not need to know whether the kernels that compute the forces their integrator uses are running on some particular kind of accelerator or a CPU. Implementing such abstractions generally costs developer time and compute time, but they are necessary evils for software that must be able to change as new hardware, new algorithms, and new implementations emerge.

Considerable progress has been made in modularizing aspects of the code base to provide effective abstraction layers. For example, by the time the main iteration loop has begun, a programmer need not know whether the MPI layer is provided by an external library (for computation taking place on multiple nodes) or by the internal thread-MPI implementation (working to parallelize computation on a single node); portable abstract atomic operations are available; development of integrators can receive vectors of positions, velocities, and forces without needing to know the details of which of the dozens of kernels computed the forces; and portable SIMD function calls compile to the correct hardware operations automatically. However, the size of the function that implements the main loop over time steps has remained at well over a thousand code and comment lines for many versions, since it remains riddled with special-case conditions, comments, and function calls for different parallelization conditions, integration algorithms, optimization constructs, housekeeping for communication and output, and ensemble algorithms. The function that computes the forces is even worse, now that both the old and new kernel infrastructures must be supported. All this code complexity is necessary for a tool like GROMACS, but needing to be aware of dozens of irrelevant possibilities is a heavy barrier to participation in the project, because it is difficult to understand all the side effects of a change.

To address this, we are in the process of a transition to C++ for much of the high-level control code. We have to remain alert to the possibility that HPC compilers are not as effective at compiling C++: the impact on execution time of the code must be negligible, while the impact on developer time is considerable. Our expectation is that the use of virtual function dispatch will eliminate much of the complexity of understanding the conditional code, including switch statements over enumerations that must be updated in widely scattered parts of the code, despite the slightly slower implementation of the actual function call; GROMACS has long used a custom C implementation of such dispatch for the interaction kernels. Objects managing their resources via RAII, exploiting destructor calls to always do the right thing, will lead to shorter development times and fewer problems because developers have fewer things to manage, and templated container types will help alleviate the burden of manual memory allocation and deallocation. Existing testing and mocking libraries will simplify the process of developing an adequate testing infrastructure, and existing support libraries such as Intel TBB will be beneficial. None of this would matter if the objectives could not be met, but the prospect of handing tedious tasks over to the compiler is attractive.
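As a tiny sketch of the direction described above, and only as an illustration (the class names are hypothetical), virtual dispatch localizes a decision that would otherwise be a switch statement repeated at many call sites:

// Adding a new integration scheme means adding one subclass,
// rather than updating switch statements scattered through the code.
class Integrator {
public:
  virtual ~Integrator() = default;
  virtual void step(double dt) = 0;   // positions/velocities owned elsewhere
};

class LeapFrog : public Integrator {
public:
  void step(double dt) override { /* v += (F/m)*dt; r += v*dt */ }
};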
Best practices in scientific software development. Version control is widely considered necessary for successful software development. GROMACS used CVS in its early days and now uses git; the ability to trace when behavior changed, and to find out what the metadata about the change might say, is supremely valuable when coordinating the information, desires, known problems, and progress of current work among a development team scattered around the world and thousands of users who rarely meet. GROMACS uses Redmine to discuss feature development, to report and discuss bugs, and to monitor intended and actual progress towards milestones. Commits in the git repository are expected to reference Redmine issues where appropriate, which generates automatic HTML cross-links to save people time finding relevant information.

Peer review of scientific research is accepted as the gold standard of quality, because of the need for specialist understanding to fully appreciate the value of, criticize, and improve the work. Software development in projects as complex as GROMACS is comparable, and our experience is that peer review works well there too. Specifically, all proposed changes to GROMACS, even those from the core authors, must be uploaded to our Gerrit code-review site and receive positive reviews from at least two developers with suitable experience before they can be merged. Documentation must be part of each change, and requiring that this happen before acceptance has eliminated many problems before they could be felt. It also creates social pressure for people to be active in reviewing others' code, lest their karma be insufficient to get their own proposals reviewed. Features are implemented and bugs fixed with the corresponding Redmine issues updated automatically, and Gerrit also provides a common venue for developers to share work in progress, either privately or publicly.

Testing is one of the least favourite activities of programmers, who would much rather continue being creative in solving new problems. It is standard procedure in software engineering to deploy continuous integration, where each new proposed change is subjected to a range of automatic tests. The GROMACS project uses Jenkins to build the project on a wide range of operating systems (MacOS, Windows, and flavours of Linux), compilers (GNU, Intel, Microsoft, and Clang, in several versions), and build configurations (with and without MPI or OpenMP, and with different kinds of SIMD), and to automatically test the results for correctness. This immediately finds problems such as programmers using POSIX constructs that are not implemented on Windows, and the regression tests detect when a change in the code leads to an unintended change in behavior. Unfortunately, many of the tests are still structured around executing a whole simulation process, which makes it difficult to track down where a problem occurred unless the code change was tightly focused, which in turn motivates discipline in proposing changes of one logical effect. As we work towards the exascale, testing of new behaviors is expected to be integrated alongside tests of existing behavior, and we will continue to build upon this test infrastructure in the future; tests are required to pass before code changes are merged.

Testing regularly for changes in execution speed is an unsolved problem that is particularly important for monitoring exascale-focused software developments, but it is less suited to deployment via continuous integration because of the quantity of computation required to test the throughput of code like GROMACS over a proper range of hardware and input conditions. It would be good to be able to execute a weekly test run that shows whether unplanned performance regressions have emerged, so they can be prioritized, but we are not there yet. While waiting for such tests, feature stability is achieved with a life-cycle-appropriate branching model, but this requires extra work identifying the point in time (i.e., the git commit) at which a problem was introduced, and working out the correct way to manage the situation, which is much better done while the change is fresh in the developers' minds. Even when the testing procedure is reasonably automatic, a long gap between commit and testing means a regression may be masked by an improvement. Extensive testing before releases is still done, and avoiding protracted bug hunts before releases makes for a much happier team.

Software that requires extensive configuration needs a build system through which a system administrator or end user can guide the kind of GROMACS build that takes place. The configuration system needs to verify that the compiler and machine can satisfy the request, which requires searching for ways to resolve dependencies and disclosing to the user what has been done and what is still unavailable. It is important that compilation not fail if configuration has succeeded, because the end user is generally incapable of diagnosing the problem: a biochemist attempting to install GROMACS on a laptop generally does not know that scrolling back through hundreds of lines of output from recursive make calls is needed to find the original compilation error, and even then they generally need to ask someone else about the problem to resolve it. It is far more efficient for users and developers alike to detect at configuration time what would make compilation fail, and to provide suggested solutions and guidance at that time. Accordingly, GROMACS uses the CMake build system, primarily for its portability support, and makes extensive use of advanced CMake constructs, including scoped variables.

Profiling. Experience has shown that it is hard to optimize software, especially HPC code, based on simple measurements of total execution speed; it is often necessary to view the performance of individual parts of the code and details of the execution on individual compute units, as well as the communication patterns. There is little value in measuring an improvement in the execution time of a kernel when the FFTs elsewhere are dominant. Standard practice is to use a profiling tool to explore which functions or code lines consume important quantities of time and to focus optimization effort there. However, for such measurement to provide useful information, the profiler must perturb the execution time by only a small amount. This is particularly challenging in the GROMACS case: an iteration typically takes a millisecond or less of wall-clock time (much less around the current scaling limit), and the functions interesting to profile might execute for only microseconds. Measurement overhead that is acceptable for most kinds of applications often leads to incorrect conclusions for GROMACS. Statistical sampling, periodically interrupting the execution to observe which core is executing which task, could work in principle, but, for example, Intel VTune Amplifier defaults to a sampling interval that does not create confidence that the use of the tool would lead to accurate observations of events whose duration is thousands of times shorter. Reducing the profiling overhead to an acceptable level, while still capturing enough information to be able to easily interpret the performance measurements, has proved challenging; additionally, it has often required expert knowledge and the assistance of the developers of the respective performance measurement tools, which makes it exceptionally hard to use profiling as part of the regular GROMACS development workflow.

We have, however, not been optimizing in the dark: the main mdrun simulation tool has included performance-accounting functionality for many years. This functionality relies on manual instrumentation of the entire source with inlined timing functions, with timing measurements based on processor cycle counters, and it is of great benefit that the log output of every GROMACS simulation contains a breakdown of detailed timing measurements of the different parts of the code.
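The style of such inlined cycle-counter instrumentation can be sketched as follows; this is a minimal illustration, not GROMACS's actual accounting API. On x86 with GCC or Clang, __rdtsc reads the time-stamp counter with nanosecond-scale overhead, cheap enough to wrap regions that run for microseconds.

#include <cstdint>
#include <x86intrin.h>

struct CycleCounter {
  uint64_t total = 0, start = 0;
  void begin() { start = __rdtsc(); }
  void end()   { total += __rdtsc() - start; }
};

CycleCounter force_cycles;   // one counter per instrumented code region

void step() {
  force_cycles.begin();
  // ... force computation ...
  force_cycles.end();        // accumulated totals are reported in the log
}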
However, this internal tracing functionality has not reached its full potential, because the collected data are typically displayed and analyzed only after averaging across MPI ranks, often hiding useful details. To realize more of this potential, we have explored reporting more detailed MPI statistics, including the minimum and maximum execution times across ranks as well as the averages; this information is, however, still less detailed than what a classical trace-profile visualizer provides. We are also exploring combining the internal instrumentation with an external tracing library by adding the API calls of various tracing libraries to our own instrumentation calls. Providing native support for detailed GROMACS-specific tracing by simply linking against such a tracing library would make it considerably easier to carry out performance analysis without the need for expert knowledge in collecting performance data while avoiding influencing the program behavior through overhead.

Future Directions

GROMACS has grown from a small simulation code into a large international software project, and it now also has highly professional developer, testing, and profiling environments to match. We believe the code is quite unique in the extent to which it interacts with the underlying hardware, and while there are many significant challenges remaining, this provides a strong base for further computing development. However, scientific software is rapidly becoming dependent on deep technical computing expertise: many amazingly smart algorithms are becoming irrelevant simply because they cannot be implemented efficiently on modern hardware, and the inherent complexity of this hardware makes it difficult even for highly skilled physicists and chemists to predict what will work. Similarly, it is not realistic to expect every research group to afford a resident computer expert. This will likely require research groups and computing centers to increasingly join efforts to create large open-source community codes, where it is realistic to fund multiple full-time developers. The high-performance computing landscape is currently changing faster than it has ever done before, which is a formidable challenge for software to keep pace with, but the potential rewards of exascale computing are equally large.

Acknowledgments

This work was supported by the European Research Council, the Swedish Research Council, and the CRESTA project. Computational resources were provided by the Swedish National Infrastructure for Computing (SNIC) and the Leibniz Supercomputing Center.
feeding features enhancing performance convolutional neural networks jan sepidehsadat hosseini seoul national university sepid seok hee lee seoul nat univ seokheel abstract nam cho seoul national university nicho extract features training data without human intervention paper show feeding effective features cnn along input images enhance performance cnn least case face related tasks focus words enforcing cnn use domain knowledge increase performance save computations reducing depth specific estimation problem since important features angle depth wrinkles faces believe gabor filter responses right features problem hence propose method get benefits bif together features learned cnn input images precisely extract several gabor filter responses concatenate input image forms tensor input like image tensor input directly fed cnn like feed multichannel image cnn addition scheme let first layer cnn convolution matrix obtained first layer actually weighted sum input image gabor responses also considered fusion input image filter bank responses looks like image enhanced trextures fused image fed cnn since convolutional neural network cnn believed find right features given problem study features somewhat neglected days paper show finding appropriate feature given problem may still important enhance performance algorithms specifically show feeding appropriate feature cnn enhances performance face related works estimation face detection emotion recognition use gabor filter bank responses tasks feeding cnn along input image stack image gabor responses fed cnn tensor input fused image weighted sum image gabor responses gabor filter parameters also tuned depending given problem increasing performance extensive experiments shown proposed methods provide better performance conventional methods use input images introduction cnns gaining attention successfully applied many image processing computer vision tasks providing better performance approaches face related tasks exceptions example cnns provide better face detection performance conventional methods feature based face detector local binary pattern lbp based method deformable part model based ones case classification cnn estimators give accurate results method based features bif one best methods among noncnn approaches cnns low vision problems use image features input learn analysis feature maps convolution layers shows wrinkle features face shapes enhanced cnn conventional one uses pixel values input result accuracy estimation much improved compared cnns moreover test approach face detection emotion recognition also obtain gains existing cnn based methods tasks features apparently effective hope feeding features along image may bring better results related work influence race gender proposing network gaobr filters nobel prize winners hubel wiesel discovered simple cells primary visual cortex receptive field divided subregions layers covering whole field also petkov proposed gabor filter suitable approximation mammal visual cortex receptive field gabor filter gaussian kernel function adjusted sinusoidal wave consisting imaginary real parts real part described cos exp cos sin sin cos wavelength real part gabor filter kernel orientation normal stripes function phase offset spatial ratio standard deviation gaussian envelope representatives respectively fig example gabor filter response face image shows find textures correspond given well hence gabor filter responses used applications orientational textures play important role fingerprint recognition face 
detection facial expression recognition estimation text segmentation super resolution texture description face detection large number face detection methods also important topic details refer complete survey face detection done zafeiriou like computer vision problems cnns effectively used face detection facial expression recognition emotion classification relatively young complicated task among many facerelated tasks since facial expression recognition fer plays important role interaction recently researches performed subject examples conventional methods tang used support vector machine svm problem ionwscu also used svm improve bag visual words bow approach hassani used advantage facial landmarks along cnns recent studies focused using cnns fer preparation input attempt approach several face related works estimation face detection emotion recognition needs different cnn architecture fed gabor filter responses input along image seen several parameters induce different filter responses applications prepare eight filter banks combining cases four two rest parameters changed depending application age gender estimation problem set let experiments paper stated number gabor filters let fgk response gabor filter normally may concatenate input image responses tensor input cnn illustrated fig hand may consider fusing input gabor responses single input feed matrix cnn shown fig figure also shows fusing input image gabor responses interpreted convolving tensor input filter denote coefficients filter wnf multiplied input image rest multiplied gabor responses fused input represented figure demonstration gabor filter bank responses kernel size applied image responses four orientations shown estimation predicting age person single image one hardest tasks even humans sometimes difficulties reason aging depends several factors living habits races genetics etc studies without using cnn well summarized survey recent works mostly based cnn examples levi hassner work first adopt cnn estimation xing considered fgk similar weighted fusion method fig example fused input sidered image concatenation fusion approaches inject gabor responses input cnn extensive experiments fusion approach fig shows slightly better performance increase case gender estimation similarly tasks requiring slightly less number parameters stated previously cnn outperforms existing methods least aidence dataset gallagher dataset gender estimation method outperforms ones adience shown table table also shows proposed network shows almost performance vgg hybrid webface dataset ten times less number parameters vgg analysis effects feeding gabor responses compare feature maps fig specifically fig shows feature maps cnn fig cnn image input layer seen features cnn contain strong facial features wrinkle textures original network believed cause better performance networks face related problems apply gabor responses cnns estimation face detection emotion recognition problems following subsections subsection show performance improved feeding gabor responses compared case feeding image input classification table age estimation classification results adience gallagher datasets network gender estimation binary classification age estimation implemented classification regression problem case age estimation classification problem segmenting age several ranges network shown fig used convolution block consists convolution layer relu max pooling fully connected block consists fully connected layer relu drop ratio method lbp eidinger best levi resnet ptp dapp cnn 
cnn gallagher dataset description perform age classification two popular datasets adience gallagher dataset including pictures large variations poses appearances lighting condition unusual facial expressions etc adience approximatively images subjects classes gallagher dataset images labeled faces divided classes gender estimation used adience casia webface images subjects obtained pictures imdb pictures dataset celebrities adience table gender estimation results adience webface datasets method bif eidinger best levi resnet etvhybrid cnn cnn adience webface test result perform experiments based standard fivefold protocol fair comparison table shows results age estimation cnn means method use gabor responses tensor input cnn fused input observed cnn slightly better cnn age regression network age estimation also implemented regression problem wish tell person exact age rather figure illustration two input feeding methods tensor input directly fed cnn tensor input fused image fed cnn example fused image weighted sum image gabor responses defined maximum age set estimate true age figure comparison feature maps first convolution layer two networks features cnn original cnn image input figure age regression network architecture classification problem tells range class ages use network shown fig problem one main differences age classification regression problem need different loss functions classification problem use softmax loss defined yiyi log piyi dataset description age regression task perform experiments two widely used datasets age estimation literature choose dataset consists large amount pictures also used database contains images subjects subjects ages range number classes yiyi encoding sample age label piyi element predicted probability vector regression use mean squared error mse mean absolute error mae loss function precise mae test result used protocol webface dataset lopo test strategy working number pictures small table shows result age estimation seen network shows better performance state art method table age estimation error adience gallagher datasets cnnresent means use residual learning method bif ebif etvhybrid cnn cnnresent cnnresent casia webface dataset figure illustration three stages face detection network architecture bounding box regression face classification use loss specifically use crossentropy loss face detection network det det ldet log log face detector cascaded cnn zhang network except use fusion input gabor responses shown fig stage called possible facial windows along bonding box regression vectors obtained bounding boxes calibrated highly overlapped ones merged others using suppression nms second third stages called respectively candidates refined using calibration nms three step networks feed gabor fusion image gabor filter parameters noted finding facial components nose mouse eyes etc important relatively straight sometimes long wrinkles important previous estimation hence reduce kernel size gabor filter also parameters respectively test result probability face yidet ground truth bonding box use network output ground truth respectively table shows get better performance almost number parameters mtcnn figs show three stages using hand crafted features improve performance help increase network convergence speed evaluate face detection method compare method six methods fddb method outperform shown fig last compare method run time cnn based methods results seen purposed method better performance mtcnn cascade cnn almost fast facial expression recognition 
dataset description section evaluate network face detection dataset benchmark fddb contains images annotated faces taken wild two types evaluation available fddb discontinuous score counts number detected faces versus number false positives continuous score evaluates much overlap bounding boxes faces ground truth detected network baseline network fer add one drop last fully connected layer decrease overlapping shown fig fer think wrinkles play important role hence set bandwith larger previous case specifically set also becomes large set figure comparision three stages mtcnn orange method green comparison performance mtcnn cascade cnn faceness joint fasterrcnn head hunter numbers parentheses area curve table comparison validation accuracy cascadecnn mtcnn group cnn table runtime comparison gpu method faceness mtcnn cascade cnn validation accuracy speed fps fps fps fps dataset description evaluate network fer dataset labeled seven classes contains images training test test result table shows result compare results fer competition winners state art methods seen network shows better performance others vggnet also reach figure illustration network fer adding fusion module input network increase performance table results fer method radu marius cristi unsupervised maxim milakov svm vggnet accuracy fer conclusion cnns image understanding use image input belief cnn automatically find appropriate features data paper shown feeding appropriate features lead improved results hence domain knowledge study appropriate features important improving algorithms specifically shown feeding gabor filter response cnn leads better performances face related problems estimation face detection emotion recognition hope applications benefited approach image processing vision algorithms gains taking appropriate features input references bilaniuk laganire laroche moulder fast lbp face detection simd architectures proceedings ieee conference computer vision pattern recognition workshops pages deeb human age estimation using enhanced features ebif icip eidinger enbar hassner age gender estimation unfiltered faces ieee transactions information forensics security gallagher chen understanding images groups people computer vision pattern recognition cvpr ieee conference pages ieee goodfellow erhan carrier courville mirza hamner cukierski tang thaler lee zhou ramaiah feng wang athanasakis milakov park ionescu popescu grozea bergstra xie romaszko chuang bengio challenges representation learning report three machine learning contests gottschlich ridge frequency estimation curved gabor filters fingerprint image enhancement ieee transactions image processing guo huang human age estimation using features computer vision pattern recognition cvpr ieee conference ieee hassani mahoor facial expression recognition using enhanced deep convolutional neural networks corr zhang ren sun deep residual learning image recognition arxiv preprint huang shimizu kobatake robust face detection using gabor filter features pattern recognition letters hubel wiese receptive fields binocular interaction functional architecture cat visual cortex journal physiology ionescu grozea local learning improve bag visual words model facial expression recognition iqbal shoyaib ryu chae directional pattern dapp human age group recognition age estimation ieee transactions information forensics security lyons akamatsu kamachi gyoba coding facial expressions gabor wavelets automatic face gesture recognition proceedings third ieee international conference ieee jain fddb 
benchmark face detection unconstrained settings technical report university massachusetts amherst levi hassner age gender classification using convolutional neural network computer vision pattern recognition workshops cvprw ieee conference pages ieee levi hassner emotion recognition wild via convolutional neural networks mapped binary patterns proc acm international conference multimodal interaction icmi november viola jones robust face detection international journal computer vision lin shen brandt hua convolutional neural network cascade face detection computer vision pattern recognition cvpr ieee conference pages chellappa age invariant face verification relative craniofacial growth model european conference computer vision eccv computer vision eccv volume pages chen relative forest attribute prediction pages springer berlin heidelberg berlin heidelberg xing yuan ling diagnosing deep learning models high accuracy age estimation single image pattern recognit mathias benenson pedersoli van gool face detection without bells whistles eccv yang luo change loy chenand tang facial parts responses face detection deep learning approach ieee international conference computer vision pages ieee petkov biologically motivated computationally intensive approaches image pattern recognition future generation computer systems yang luo change loy chenand tang wider face face detection benchmark computer vision pattern recognition cvpr ieee conference ieee pramerdorfer kampel facial expression recognition using convolutional neural networks state art corr lei liao learning face representation scratch corr qin yan joint training cascaded cnn face detection computer vision pattern recognition cvpr ieee conference zhang image based static facial expression recognition multiple deep network learning november ram dogiwal shishodia upadhyaya super resolution image reconstruction using wavelet lifting schemes gabor filters confluence next generation information technology summit confluence international conference ieee zafeiriou zhang zhang survey face detection wild past present future computer vision image understandings zhang zhang qiao joint face detection alignment using cascaded convolutional networks ieee signal processing letters ranjan patel vishal chellappa deep pyramid deformable part model face detection biometrics theory applications systems btas ieee int conf pages sabari raju basa pati ramakrishnan gabor filter based block energy analysis text extraction digital document images document image analysis libraries proceedings first international workshop ieee simonyan zisserman deep convolutional networks image recognition corr szegedy vanhoucke ioffe shlens wojna rethinking inception architecture computer vision ieee conference computer vision pattern recognition cvpr june tang deep learning using support vector machines corr
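To make the two input-feeding schemes concrete, the sketch below builds the real-part Gabor kernel from the formula quoted in the paper (Gaussian envelope times a cosine wave in rotated coordinates), stacks the orientation responses with the image as a tensor input, and forms the fused input as a weighted sum — equivalently a fixed 1x1 convolution over the stack. The parameter values and fusion weights are placeholders, not the tuned settings reported above.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(ksize, sigma, theta, lam, psi=0.0, gamma=0.5):
    """Real part of a Gabor kernel: exp(-(x'^2 + (gamma*y')^2)/(2 sigma^2)) * cos(2 pi x'/lam + psi)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam + psi)

def filter_bank_responses(img, thetas, ksize=7, sigma=2.0, lam=4.0):
    """One filtered image per orientation, same spatial size as the input."""
    return [convolve2d(img, gabor_kernel(ksize, sigma, t, lam), mode="same")
            for t in thetas]

img = np.random.rand(64, 64)                      # stand-in for a face crop
responses = filter_bank_responses(img, thetas=[0, np.pi/4, np.pi/2, 3*np.pi/4])

# Scheme 1: tensor input -- stack image and responses as channels.
tensor_input = np.stack([img] + responses, axis=0)          # shape (5, 64, 64)

# Scheme 2: fused input -- a weighted sum of the channels, i.e. a fixed
# 1x1 convolution over the tensor input (weights illustrative, not learned).
weights = np.array([0.6, 0.1, 0.1, 0.1, 0.1])
fused_input = np.tensordot(weights, tensor_input, axes=1)   # shape (64, 64)
```

The fused variant trades a small amount of information for a single-channel input, which matches the paper's observation that it needs slightly fewer parameters while performing comparably to the stacked tensor.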
aug toric rings inseparability rigidity mina bigdeli herzog dancheng abstract let field affine semigroup toric ring paper give explicit description components cotangent module classifies infinitesimal deformations particular interested unobstructed deformations preserve toric structure deformations call separations toric rings admit separation called inseparable apply theory edge ring finite graph coordinate ring convex polyomino may viewed edge ring special class bipartite graphs shown coordinate ring convex polyomino inseparable call bipartite graph edge ring combinatorial description graphs given results applied show rigid rigid complete bipartite graph one edge removed introduction paper study infinitesimal deformations unobstructed deformations toric rings preserve toric structure apply theory edge ideals bipartite graphs already infinitesimal homogeneous deformations toric varieties considered geometric point view view point paper algebraic exclude toric rings mind toric rings naturally appear combinatorial contexts aspect deformation theory also pursued papers deformations rings attached simplicial complexes studied let field infinitesimal deformations finitely generated parameterized elements cotangent module case domain isomorphic denotes module differentials ring called rigid refer reader theory deformation let affine semigroup affine semigroup ring interested module module naturally denotes associated group affine semigroup free group mathematics subject classification primary secondary key words phrases deformation toric ring rigid inseparable bipartite graph convex polyomino paper written third author visiting department mathematics university wants express thanks hospitality finite rank component finite dimensional space section describe vector space provide method compute dimension let generators associated group subgroup ring laurent polynomials generated monomials let polynomial ring indeterminates may viewed ring deg homomorphism thi homomorphism denote kernel homomorphism ideal called toric ideal associated generated homogeneous binomials describe binomials consider group homomorphism canonical basis kernel group homomorphism lattice called relation lattice define binomial let ideal generated binomials well known homogeneous degree let fvs system generators consider summarizing results section dimk computed follows let rank rank submatrix whose rows ith rows let rank submatrix whose columns jth columns dimk section introduce concept separation torsionfree lattice note lattice torsionfree relation lattice affine semigroup given integer say admits exists torsionfree lattice rank homomorphism identifies additional condition makes sure deformation induces element see precise definition say inseparable lattice admits call toric ring inseparable relation lattice inseparable particular generators belong hyperplane also admits natural standard grading inseparable general converse true since infinitesimal deformation given nonzero element may obstructed demonstrate theory show numerical semigroup generated three elements complete intersection complete intersection least two proof fact use structure theorem semigroups given last section devoted study edge ring bipartite graph class rings well studied combinatorial commutative algebra see given simple graph vertex set one considers edge ring toric ring generated monomials edge viewing edge ring semigroup ring edges correspond generators semigroup say inseparable corresponding semigroup inseparable main result first part section 
combinatorial criterion inseparable let cycle chord splits two disjoint connected components obtained restricting complement path called crossing path respect one end belongs end criterion corollary says bipartite graph inseparable cycle unique chord exists crossing path respect particular cycle chord inseparable using criterion show theorem coordinate ring convex polyomino may interpreted special class edge rings inseparable rest section consider rigidity bipartite graphs call characterize theorem bipartite graphs terms certain constellations edges cycles graph classify rigidity bipartite graphs much complicated general combinatorial criterion bipartite graph rigid instead consider graph obtained removing edge complete bipartite graph shown proposition rigid rigid remains challenging problem classify rigid bipartite graphs toric rings let affine semigroup finitely generated subsemigroup let minimal generators fix field toric ring associated ring laurent polynomials generated monomials thn let polynomial ring variables presentation thi kernel map called toric ideal attached corresponding presentation presentation extended group homomorphism denotes canonical basis let kernel group homomorphism lattice called relation lattice note free abelian group vector set basic fact see generated binomials define setting deg graded ideal deg let basis since prime ideal may localize respect prime ideal obtain sih fvr sih particular see height rank let module differentials since domain cotangent module isomorphic since follows well hence zhgraded denotes associated group smallest subgroup containing goal compute graded components module differentials presentation rdxi submodule free rdxi generated elements dfv dfv dxi stands partial derivative respect evaluated modulo one verifies dfv dxi basis element dxi rdxi given degree submodule deg dfv deg denote graded homr exact sequence gives rise exact sequence modules exact sequence may serve definition namely cokernel let fvs system generators may assume elements form basis general much larger observe elements dfvs form system generators let free graded basis deg deg dfvi define epimorphism dfvi kernel denote composition epimorphism inclusion map denoted identify image submodule consisting first describe components let denote spanned kla spanned vectors set defined theorem dimk dimk dimk kla proof let canonical basis kernel map show space assuming isomorphism proved let image canonical projection implies isomorphic orthogonal complement thus dimk dimk let cokernel obtain commutative diagram exact rows columns kla implies dimk dimk diagram shows dimk dimk dimk kla remains prove isomorphism observe kernel let basis dual since deg deg dfvi follows hence order complete proof need prove following statement let since moreover since ker follows dfvs implies dxj dxj note one use definition see therefore implies conclusion see particularly implies since either pfollows satisfying forp particular write converse assume set since since follows therefore statement proved completes proof want determine dimension observe generated elements dxi note deg dxi set let kda spanned vectors set defined proposition let dimk dimk kda proof spanned vectors dxi desired formula dimk follows shown dxi prove notice dxi thus dxi since case corollary let dimk kda dimk kla dimk equality holds summarizing discussions section observe information needed compute dimk obtained indeed dimk computed follows let rank rank submatrix whose rows ith rows let rank submatrix whose columns jth columns 
dimk corollary suppose proof since follows dimk dimk rank thus assertion follows corollary inequality corollary also deduced following lemma lemma fix every pair proof assume contrary say since consequently contradiction separable inseparable saturated lattices section study conditions affine semigroup ring obtained another affine semigroup ring specialization reduction modulo regular element course always choose case isomorphic polynomial ring variable obtained reduction modulo regular element trivial case consider proper solution finding specializes exists specializes called inseparable otherwise separable turns separability naturally phrased terms relation lattice let subgroup subgroup often called lattice ideal generated binomials called lattice ideal following properties known equivalent torsionfree prime ideal iii exists semigroup proof facts found example lattice torsionfree called saturated lattice let canonical basis canonical basis let denote group homomorphism convenience denote homomorphism definition let saturated lattice say exists saturated lattice rank rank iii exists minimal system generators fws vectors linearly independent lattice called called inseparable moreover lattice satisfying iii called lattice also call semigroup toric ring inseparable relation lattice inseparable remark suppose lattice let lattice ideal easily seen rank rank indeed rank height height rank contradicting definition moreover since domain particular fws minimal system generators fws minimal system generators see lemma details implies indeed divides fwj since fwj minimal generator since prime ideal polynomial fwj must irreducible possible let since fwj fvj hence fvs minimal system generators affine semigroup semigroup ring standard graded exists linear form polynomial ring minimal generators following result provides necessary condition recall affine semigroup called positive set invertible elements theorem let positive affine semigroup minimally generated relation lattice suppose particular standard graded inseparable proof since exists saturated lattice satisfying conditions given definition since follows infinitesimal deformation isomorphic let remark fwj fvj fvs minimal system generators note set residue class let canonical epimorphism let image order determine generators fix may assume modulo obtain fwj fvj xkj second equality used third equality due fact homomorphism corresponding infinitesimal deformation given xkj fvj induces element since follows implies positive see proposition assume exists since since condition iii definition vectors linearly independent obtain contradiction hence required first example separable lattice consider relation lattice numerical semigroup discussion let numerical semigroup minimally generated gcd recall facts let smallest integer nhk let rik nonnegative integers rik denote relation lattice three vectors generate rij case unique minimal system generators case two fvi minimally generate example semigroup generators example semigroup generators exist distinct integers rij case minimally generated xci xrkik xrl xckk example semigroup generators known easy prove rigid indeed since euler relations deg imply epimorphism dxi thi graded maximal ideal since rank rank follows ker torsion module thus obtain following exact sequence induces long exact sequence homr since domain homr follows words rigid course argument applied numerical semigroup generated element seen rigid next result shows relation lattice even prove need lemma let saturated lattices satisfy conditions 
given definition number minimal generators fws minimal system generators fws minimal system generators proof denote reduction modulo conditions definition guarantee facts follows suppose fws minimal system generators generated fws since fws minimal system generators conversely assume fws minimal system generators want show fws set fws obtain following short exact sequence natural epimorphism proposition obtain exact sequence since follows isomorphism nakayama lemma implies hence desired proposition let numerical semigroup set let relation lattice notation discussion dimk exists xci xrkik xrl rik ril case proof consider case rij see discussion fix since rij follows since positive semigroup follows corollary proposition dimk dimk hence dimk consider vectors set prove first show saturated indeed follows implies since saturated thus follows thus hence saturated next show clear since fwi fvi converse direction need note divides applying lemma conclude minimal system generators satisfying condition iii definition consequently similar arguments work case generated two vectors let lattice generated claim indeed ideal matrix whose row vectors contains elements choice follows gcd thus shows saturated since fwi fvi since lemma implies since satisfies also condition iii definition follows way shown case discussion xci xrkik xrl xckk thus complete intersection exponents without loss generality may assume since lattice basis saturated follows ideal equal consider lattice whose basis consists row vectors ideal matrix contains hence equal thus saturated furthermore implies since rank rank conditions definition satisfied applying lemma obtain since condition iii definition also satisfied see similarly one shows edge rings bipartite graphs let finite simple graph vertex set let field kalgebra called edge ring denotes set edges let denote polynomial ring indeterminates let homomorphism toric ideal ker denoted section discuss inseparability rigidity edge ring bipartite graph may well considered toric ring associated affine semigroup generated elements canonical basis let bipartite graph generators given terms even cycles recall walk sequence edge called closed walk closed walk called cycle called even closed walk even observe cycle bipartite graph even cycle given even cycle generally even closed walk edges ekj together edge associate vector defined denotes canonical basis note determined sign call well vector corresponding simplicity write recall toric ideal finite bipartite graph minimally generated indispensable binomials binomials sign belong system generators furthermore binomial indispensable induced cycle cycle without chord particular graph obtained deleting edges belong cycle therefore may assume throughout section edge belongs cycle rest section let bipartite graph vertex set edge set edge associate vector canonical basis semigroup generated denote simply note let set cycles vector corresponding may assume cycles induced cycles minimally generated see course also generated fvs particular relation lattice vector space spanned let section set kla spank addition also set spank general proper subset however lemma kla proof since kla let cycle chords following describe process obtain induced cycles vertex set contained choose chord note chord divides two cycles cycles induced process stops otherwise divide cycles induced proceeding way obtain induced cycles denoted cik cij consists least one chord moreover edges cij chords edges general cycle hence follows construction induced cycles cij vij sum certain 
terms edge hence vij since follows vij implies vij kla kla since linear combination vij discussion separability need know vanishes see theorem need interpretation edge rings given following formula proof equation note without loss generality assume note even since contains odd cycle follows definition conversely assume let vector kth entry negative thus belong therefore later also shall need lemma let even closed walk let edge property vector belongs proof may view bipartite graph bipartition see belongs space spanned vectors corresponding induced cycles edges vector space subspace since edge cycle edges follows call space spanned vectors cycle space respect usually cycle space defined bipartite graphs dimension cycle space depend known number connected components see corollary inseparability subsection present characterizations bipartite graphs inseparable note says edge chord accordingly split set two subsets edge chord also set spank since assumption edges belong cycle obtain dimk dimk indeed let graph obtained deleting edge leaving vertices unchanged cycle space lemma one proof since follows dim see proposition thus since follows since spank assertion follows stating next result first introduce concepts let cycle path called path chord vertices called ends note chord path chord let path chord may assume together edges let another path chord say cross one end belongs interval end belongs particular chord crosses say crossing path chord respect chord theorem let bipartite graph edge set let edge ring following conditions equivalent exists cycle chord crossing path chord respect relation lattice proof assume hold let defined assumption admits path chord denoted crosses denote two ends union two paths ends since cycles neither edge chord follows lemma vectors belong therefore linear combination applying lemma obtain contradiction may assume cycle given edge set let set path vertex set define graph disjoint graphs graph subgraph induced next subgraph induced first define renaming claim obtained disjoint indeed condition implies claim identify vertex vertex obtain indeed let graph obtained identification show obviously let definition impossible since definition follows impossible reason thus proved claim edge ring form variable corresponds edge variable corresponds edge mapped identification map let relation lattice relation lattice saturated lattices claim satisfy conditions iii respect see definition first show let minimal generator exists induced cycle since follows hence induced cycle whose image identification map therefore proves condition since satisfied follows moreover since domain implies height height particular rank rank thus condition also satisfied finally definition exist induced cycle edge say induced cycle edge say let implies condition iii implication follows theorem corollary let bipartite graph inseparable cycle unique chord exists crossing path chord respect particular cycle chord inseparable proof theorem inseparable cycle chord exists crossing path chord respect assume first inseparable said cycle chord desired property conversely assume cycle unique chord crossing path chord let cycle chord suppose another chord say crosses done otherwise divides two smaller cycles may assume chord since less chords may apply induction number chords cycle deduce exists crossing path chord respect path chord also crossing path chord respect example theory developed far consider coordinate rings convex polyominoes first recall definitions facts convex polyominoes let consider partially 
ordered set let set called interval cell interval form elements called vertices denote set vertices intervals called edges set edges denoted let finite collection cells two cells called connected exists sequence cells cells intersect edge cells pairwise distinct called path finite collection cells called polyomino every two cells connected vertex set denoted defined edge set denoted defined polyomino said vertically column convex intersection vertical line convex similarly polyomino said horizontally row convex intersection horizontal line convex polyomino said convex row column convex figure shows two polyominos whose cells marked gray color right hand side polyomino convex left one figure let polyomino let field denote polynomial variables xij xij xkl xil xkj called inner minor cells belong ideal generated inner minors called polyomino ideal also set shown domain hence toric ring convex toric parametrization given following proof theorem let convex polyomino inseparable proof set associate bipartite graph figure shows polyomino associated bipartite graph figure let subring polynomial ring generated monomials words edge ring bipartite graph let xij shown kernel homomorphism xij thus desired toric parametrization known generated binomials corresponding cycles using corollary enough show cycle unique chord say crossing path chord respect since bipartite graph even cycle also chord since every induced cycle since one chord must assume vertices listed counterclockwise chord notation introduced follows vertices consider following cases suppose first without loss generality may assume since convex vertices vertex follows edge chord contradicting assumption unique chord similarly case also possible remains consider case without loss generality may assume either vertex connectedness convexity may assume note belong thus obtain path crossing path chord respect say subsection consider weak form rigidity however stronger inseparability let finite bipartite graph vertex set edge set enp edge ring toric ring whose generators elements canonical basis may assume edge belongs cycle set cycles set induced cycles let one cycles edges labeled counterclockwise two distinct edges said parity eij eik even number lemma let let parity moreover kla proof since cycle induced cycle let edges labeled counterclockwise thus parity follows belong either one summands shows conversely suppose let simplicity may assume correspond vertices edges correspond elements general let follows hence follows edges vertices pairwise different sum suppose edges parity sum consists odd sum hence none summands belongs since exists summand summand implies chord however possible indeed mod case next show kla note kla lemma order obtain desired equality need show since exists induced cycle say let parity may assume since parity assume without loss generality even closed walks let since vertex belongs vertex lemma implies similarly follows since differs sign either follows required lemma suppose proof let since edge however vectors belong property hence implies corollary assume inseparable let dimk kla dimk otherwise dimk kla dimk proof since assume inseparable follows corollary proposition dimk dimk dimk since assumption edge belongs cycle follows dimk thus dimk dimk similarly dimk dimk moreover together lemma dimk kla dimk desired otherwise two cases consider kla definition kla lemma kla using lemma together lemma theorem let bipartite graph inseparable following statements equivalent exist edges induced cycle parity induced cycle proof 
let vectors corresponding edges respectively dimk kla dimk corollary note therefore corollary particular semirigid assumption exists note otherwise kla particular contradiction since inseparable follows therefore corollary corollary let edges corresponding vectors respectively since exists induced cycle implies parity lemma moreover induced cycle suppose exists set note implies klb kla therefore since previous case corollary let convex polyomino contains one cell proof assume contains unique cell square theorem conversely assume exist two edges induced cycle satisfying condition theorem let vertices corresponding edge respectively two edges correspond vertices follows without loss generality may assume let induced cycle corresponding cell since contains edge must contain condition thus claim cell suppose case let four cells share common edge cell note contains least one indeed since connected since assumption contains cell different exists path cell path must contain one however contains exactly one two vertices words exists induced cycle contains exactly one edges contradicted condition thus claim proved classes bipartite graphs rigid let bipartite graph vertex set edge set odd thus obtained complete bipartite graph deleting one edges observe polyomino edge use denote vector let cycle stands vector corresponding denotes set vertices main result subsection following proposition let edge ring semirigid rigid rigid need preparations first introduce notation let set even odd also set lemma let following conditions equivalent proof first note even odd indeed suppose prove induction assertion trivial suppose consider case since case similar case exist even odd let induction implies conversely obvious note since given integer follows required use induction see induction proof assume without restriction may assume otherwise contradiction inequality used hence exists even number set induction corollary let either proof suppose lemma note lemma hence even number say set suppose contradiction therefore similarly since lemma implies moreover choices lemma let contains cycle dimk dimk particular proof let graph let graph obtained adding vertices belong graph cycle tree spanning tree tree choose connected component let induced graph set since connected exists edge one end end let graph obtained adding edge since neither contains cycle follows also cycle one edge proceeding way obtain finitely many steps spanning tree contains particular exists subset say since spanning trees number edges namely follows contains unique induced cycle say vector corresponding cycle follows dimk since number induced cycles hand dimk dimk dimk equal follows dimk dimk completing proof proposition proof proposition since associated polyomino containing vertices corollary let kla spanned vectors corresponding cycles implies dimk kla since dimk dimk particular rigid required let want prove following cases consider case corollary either see corollary edge lemma follows contains cycle lemma induced cycle follows lemma fact induced cycle prove kla show kla given induced cycle even odd obtain two cycles note kla linear combination kla induced cycle one lemma kla particular case exists unique kla particular hence assume symmetry need consider cases first assume since exists odd integer corollary either second case happen hence words denote let cycle let implies dimk compute dimk kla notice induced cycle thus kla contains cycle space complete bipartite graph bipartition dimension see thus dimk next assume odd moreover corollary 
either suppose first cycle implies thus similarly case see kla suppose next denote let cycle let vector corresponding implies dimk hand kla contains cycle space subgraph induced dimension thus finally suppose also check deduce process last case induced cycle claim kla given induced cycle even odd let belong kla linear combination thus kla claimed particular case without restriction may assume indeed lemma assume first even induced cycle kla let cycle choose even number obtain two cycles since linear combination since kla kla thus kla particular next assume even odd notice write moreover dimk kla contains cycle space graph obtained deleting edge hence dimk kla dimk induced cycle let induced cycle vectors correspond cycles respectively belong kla linear combination follows kla first note indeed either contradiction thus lemma follows kla particular finally assume follows kla case may assume need consider case may assume even odd let induced cycle choose even number let two cycles belong kla linear combination implies kla particular case may assume induced cycle implies may assume even numbers let odd number let since linear combination kla consequently thus shown shows rigid desired references altmann computation vector space affine toric varieties pure appl algebra altmann minkowski sums homogeneous deformations toric varieties math altmann bigdeli herzog algebraically rigid simplicial complexes graphs altmann christophersen deforming schemes math ann altmann christophersen cotangent cohomology rings manuscripta math bruns herzog rings cambridges studies advanced mathematics ene herzog bases commutative algebra graduate studies mathematics ams herzog generators relations abelian semigroups semigroup rings manuscripta math qureshi ideals generated collections cells stack polyominoes algebra ohsugi hibi koszul bipartite graphs advances applied mathematics ohsugi hibi toric ideals generated quadratic binomials algebra stevens deformations singularities lecture notes mathematics berlin springer villarreal monomial algebras pure applied mathematics marcel dekker mina bigdeli faculty mathematics institute advanced studies basic sciences iasbs zanjan iran address herzog mathematik essen germany address dancheng department mathematics soochow university suzhou address ludancheng
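As a compact reminder of the cycle-binomial dictionary used throughout the edge-ring sections, the following is a reconstruction in standard notation (cf. the Ohsugi-Hibi and Villarreal entries in the references), offered as a paraphrase rather than a verbatim restatement of the paper's displayed formulas:

```latex
% C an even cycle with edges e_{i_1}, e_{i_2}, \dots, e_{i_{2k}} listed in
% order along C; x_e is the variable of edge e in K[x_e : e \in E(G)].
f_C \;=\; \prod_{j=1}^{k} x_{e_{i_{2j-1}}} \;-\; \prod_{j=1}^{k} x_{e_{i_{2j}}},
\qquad
v_C \;=\; \sum_{j=1}^{2k} (-1)^{j+1}\, e_{i_j} \;\in\; \mathbb{Z}^{E(G)}.
% For a bipartite graph G the toric ideal I_G is generated by the binomials
% f_C of even cycles, and minimally generated (by indispensable binomials)
% by the f_C of induced, i.e. chordless, cycles -- the fact invoked above.
```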
new approach probabilistic programming inference jul frank wood department engineering university oxford jan willem van meent vikash mansinghka department statistics computer science lab columbia university massachusetts institute technology abstract introduce demonstrate new approach inference expressive probabilistic programming languages based particle markov chain monte carlo approach simple implement easy parallelize applies probabilistic programming languages supports accurate inference models make use complex control flow including stochastic recursion also includes primitives bayesian nonparametric statistics experiments show approach efficient previously introduced methods introduction probabilistic programming differs substantially traditional programming particular probabilistic programs written parts fixed advance instead take values generated runtime random sampling procedures inference probabilistic programming characterizes conditional distribution variables given observed data assumed generated executing probabilistic program exploring joint distribution program execution traces could generated observed data using markov chain sampling techniques one way produce characterization propose novel combination ideas probabilistic programming particle markov chain monte carlo pmcmc yields new scheme exploring characterizing set probable execution traces approach based repeated simulation probabilistic program easy implement parallelize show approach supports accurate inference models make use complex control flow including stochastic recursion well primitives nonparametric bayesian statistics experiments also show approach efficient previously introduced samplers language probabilistic programming system exists two versions results paper obtained using called interpreted employed interpreted execution model language syntax derived modeling language since time original publication syntax execution model changed venture style syntax deprecated new version using syntax much closer anglican host language clojure new anglican simply called anglican compiled language style cps transformation compiles anglican programs clojure programs subsequently compiled java virtual machine jvm bytecode clojure compiler change language execution model syntax affect neither substance claims paper however readers wishing experiment language starting code examples appearing section would well note evolution change absolute time required perform inference given models general latest compiled version anglican ten one hundred times faster interpreted ancestor paper originally appeared proceedings international conference artificial intelligence statistics aistats updated reflect changes latest version anglican language http http http http new approach probabilistic programming inference original syntax original deprecated anglican syntax extended three toplevel special forms refer directives assume symbol expr observe expr const predict expr expr expression symbol unique symbol const constantvalued deterministic expression semantically assume random variable generative declarations observe condition distribution assume variables noisily constraining output values random functions assume variables match observed data predict watches report via print values variables program traces explored anglican probabilistic program interpretation taken exploration space execution traces obey hard reflect soft observe constraints order report functions via predict conditional distribution subsets assume variables like 
anglican eagerly exchangeably recursively evaluates subexpressions instance expr proc arg arg applying procedure may random procedure special form resulting evaluating first expression proc value arguments anglican counts applications reported computational cost proxy sec anglican supports several special forms notably lambda arg arg body allows creation new procedures pred cons alt supplies branching control flow also begin let define quote cond anglican exposes eval apply deterministic procedures include list car cdr cons mem arithmetic procedures etc randomness language originates builtin random primitives two types elementary random primitives poisson gamma flip discrete categorical normal generate independent identically distributed samples called repeatedly arguments exchangeable random primitives crp return random procedure internal state generates exchangeably distributed samples called repeatedly interpreted version anglican outer proc observe expr must random primitive guarantees likelihood output given arguments computed exactly new syntax compiled version anglican supports syntax closely integrates host language clojure macro defquery defines anglican program within clojure defquery symbol body symbol name program arguments may used pass values parameters observed variables program set allowable body expressions body subset clojure language basic clojure language forms first order primitives supported macros functions inherited clojure although subset implemented cps version anglican random primitives normal discrete return first class distribution objects rather sampled values furthermore user programmable without requiring modifications anglican compiler special forms sample observe associate values random variables drawn distribution sample dist observe dist value language additionally provides data type sequences random variables refer random process random process implements two operations produce process absorb process value produce primitive returns distribution object next random variable sequence absorb primitive returns updated random process instance value associated next random variable sequence random process commonly used represent exchangeably distributed sequence random variables may used represent sequence random variables possible construct distribution next variable given preceding variables random process constructors customarily identified uppercase names crp manner user programmable finally predict form may used point program generate labeled output values predict label expr cond map reduce filter repeatedly comp partial frank wood jan willem van meent vikash mansinghka given previously defined query inference may performed using doquery macro doquery algorithm symbol inference algorithm may specified using algorithm keyword options inference algorithm supplied arguments well doquery macro constructs lazy sequence program execution states contain predicted values optionally importance weight may consumed analyzed instance outer clojure java program inference execution trace sequence memory states resulting sequence function applications performed interpretation program probabilistic programming systems like anglican variable may declared output random procedure variables take different values independent interpretations program leads computational trace tree interpretation time branch every random procedure application define probability single execution trace first fix ordering exchangeable lines program index observe lines let likelihood observe output 
random procedure type gamma poisson etc argument possibly multidimensional set random procedure application results computed likelihood observation evaluated type parameter functions subset define probability execution trace set observe quantities set random procedure application results marks distributions sample number type random procedure applications performed nth observe may vary one program trace next define probability sequence outputs empty set kth values generated random procedure applications trace observation likelihood computation cardinality set notated arises implicitly total number random procedure applications given execution trace arguments type random procedure may functions subsets subsets variables note also note variable referencing defines directed conditional dependency structure probability model encoded program need often due variable scoping depend outputs previous random procedure applications use sampling explore characterize distribution distribution random procedure outputs lead different program execution traces conditioned observed data related approaches include rejection sampling singlesite members set directed probabilistic models joint distributions expressed probabilistic programs unroll possible execution traces equivalent joint distribution probabilistic programming frameworks anglican included support recursive procedures branching values returned random procedures corresponding set models superset set directed graphical models related efforts eschew operate restricted set models inference techniques sampling readily employed new approach towards new approach probabilistic programming inference first consider standard sequential monte carlo smc recursion sampling sequence intermediate distributions terminates joint given note sequence intermediate approximating distributions constructed syntactically allowed reordering assume observation likelihoods pushed far left sequence approximating distributions possible however clear proceed case assume unweighted samples produce approximate samples new approach probabilistic programming inference via importance sampling may choose proposal distribution sampling weighting discrepancy distribution interest rive samples unnormalized weights hats notate difference weighted unweighted samples weighted vice versa expression simplifies substantially propose prior case proposal distribution defined continued interpretation program observation likelihood evaluation case weight simplifies sampling unweighted particle set completely describes smc probabilistic program inference smc procedure described first approximation inner loop pmcmc corresponds procedure whereby probabilistic program interpreted parallel possibly particle thread process observation likelihood calculations unfortunately smc finite set particles directly viable probabilistic programming inference familiar reasons particle degeneracy inefficiency models global continuous parameters etc pmcmc hand directly viable pmcmc probabilistic programming inference algorithm exploring space execution traces uses smc proposals internally unlike prior art allows sampling execution traces changes potentially many one variable time particular variant pmcmc discuss paper although developed engines based pmcmc variants including particle independent metropolis hastings conditional sequential monte carlo works iteratively smc first sweep reinsertion retained particle trace set particles every stage smc theoretically justified transition operator like gibbs operator 
always accepts paper describe pmcmc probabilistic programming inference algorithmically experimentally demonstrate relative efficacy probabilistic programming inference alg function stands multinomial sample items set pairs element consists unnormalized weight interpreter memory states sample value returned function also returns original corresponding unnormalized weight kind weight bookkeeping retains particles results outermost observe likelihood function applications unnormalized weights available retained particle next sweep algorithm pmcmc prob prog inference number particles number sweeps run smc initialize interpreters ordered lines program fork end directive assume interpret end else directive predict interpret end interpret else directive observe interpret end end end end alg run smc means running one sweep loop particles retained particle program line fork means copy entire interpreter memory datastructure efficient implementations characteristics similar posix fork command interpret means execute line given interpreter interpreting observe must interpreter return weight result outermost apply observe statement bars indicate temporary data structures averages sets ordered unions implemented append operations note efficient pmcmc algorithms probabilistic programming inference particular reason fork unless observe frank wood jan willem van meent vikash mansinghka interpreted alg presented form expositional purposes proposal density expressed similar fashion terms allowing full acceptance probability written random database refer approach sampling space traces proposed random database rdb rdb sampler sampler single variable drawn course particular interpretation probabilistic program modified via standard proposal modification accepted comparing value joint distribution old new program traces completeness review rdb noting subtle correction acceptance ratio proposed original reference proper larger family models rdb sampler employs data structure holds random variables associated execution trace along parameters log probability draw note interpretation program deterministic conditioned new proposal trace initialized picking single variable random draws resampling value using reversible kernel starting initialization program rerun generate new set variables correspond new valid execution trace instance random procedure type remains reuse existing value set rescoring log probability conditioned preceding variables necessary random procedure type changed new random variable encountered value sampled usual manner finally compute probability rescoring observe needed accept probability min order calculate ratio proposal probabilities need account variables resampled course constructing proposal well fact sets may different cardinalities use slightly imprecise notation refer set variables resampled let represent set variables common execution traces proposal probability given implementation initialization simply resampled conditioned preceding variables reverse testing programming probabilistic program interpreters software development effort involving correct implementation interpreter correct implementation general purpose sampler methodology employ ensure correctness involves three levels testing unit tests measure tests conditional measure tests unit measure tests context probabilistic programming unit testing includes verifying interpreter correctly interprets comprehensive set small deterministic programs measure testing involves interpreting short revealing programs consisting 
Such programs consist of assume and predict statements, producing a sequence of ancestral, unconditioned samples (there are no observes), and the interpreter output is tested relative to ground truth. Ground truth is computed via exhaustive enumeration, analytic derivation, a combination of the two, or, where that is impossible, an independent computational system like Matlab. Various comparisons are made between the empirical distribution constructed from the accumulating stream of output predicts and the ground truth: divergences for discrete sample spaces and Kolmogorov-Smirnov test statistics for continuous sample spaces. While it is possible to construct distribution-equality hypothesis tests for some combinations of test statistic and program, we are generally content to accept interpreters for which there is clear evidence of convergence of the test statistics towards zero. Anglican passed all unit and measure tests. Conditional measure tests, that is, measure tests involving conditioning, provide additional information beyond that provided by measure and unit tests. Conditioning involves endowing programs with observe statements that constrain or weight the set of possible execution traces, and interpreting observe statements engages the full inference machinery. Conditional measure test performance is measured in the same way as measure test performance, and can also be used to compare different probabilistic programming inference engines. [Figure 1 (plot residue removed): comparative conditional measure test performance of PMCMC and the RDB inference engine on the HMM, mixture, branching, and Marsaglia test programs; panels plot distance to ground truth against number of simulations, wall-clock time, and apply counts, with a fourth program-specific panel showing marginal posteriors, number of components, or CDFs.] Inference engine comparison. We compare PMCMC and RDB by measuring convergence rates on an illustrative set of conditional measure test programs. Results from four tests are shown in Figure 1, where each program was interpreted using both inference engines. PMCMC was found to converge faster on the conditional measure test programs that correspond to expressive probabilistic graphical models with rich conditional dependencies. Of the four test programs, one corresponds to state estimation in a hidden Markov model (HMM) with continuous observations (the HMM program); one corresponds to learning an uncollapsed Dirichlet process mixture of Gaussians with fixed hyperparameters (the mixture program); one is a multimodal branching program with deterministic recursion, which could be represented as a graphical model only if all possible execution paths were enumerated (the branching program); and one corresponds to inferring the mean of a univariate normal generated via the Marsaglia algorithm, which halts with probability one but generates an unknown number of internal random variables (the Marsaglia program). We refer to expressive models as those with complex conditional dependency structures, and to simple models as programs that encode models with few free parameters. These programs illustrate our claims and are included to document the correctness and completeness of the Anglican implementation, while also demonstrating that the gains illustrated do not come at great cost even for simple programs where, a priori, PMCMC might reasonably be expected to underperform. In the figures, three panels report similar-style findings across all test programs and a fourth is specific to the individual test program. PMCMC results are reported for an interpreter run with a fixed number of particles; the choice of particle count is largely arbitrary, and results are stable over a large range of values. For PMCMC (dark blue) and RDB (light orange) we report the lower (dashed), median (solid), and upper (dashed) percentiles over repeated runs with differing random number seeds. The distances we refer to are computed via a running average of the empirical distribution of predict statement outputs against ground truth, starting from the first predict output; note that lower is better. We define the number of simulations as the number of times the program is interpreted in its entirety.
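For continuous sample spaces, the Kolmogorov-Smirnov statistic used by the measure tests can be computed as below; the standard-normal ground-truth CDF is only an illustrative example.

    # Minimal measure-test sketch: KS distance between the empirical CDF
    # of accumulated predict outputs and an analytic ground-truth CDF.
    import math

    def normal_cdf(x, mu=0.0, sigma=1.0):
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

    def ks_statistic(samples, true_cdf):
        xs = sorted(samples)
        n = len(xs)
        d = 0.0
        for i, x in enumerate(xs):
            # the empirical CDF jumps from i/n to (i+1)/n at x
            d = max(d, abs(true_cdf(x) - i / n), abs(true_cdf(x) - (i + 1) / n))
        return d

    print(ks_statistic([0.1, -0.5, 1.2, 0.3], normal_cdf))  # shrinks as samples grow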
times program interpreted entirety rdb means number simulations exactly number sampler sweeps pmcmc number particles multiplied number sampler sweeps time horizontal axes report wall clock time apply axes report number function applications performed interpreter distance time plots observed pmcmc times reported via filled circles dotted lines illustrate hypothetically achievable via parallelism carefully note pmcmc requires completing number simulations equal number particles emitting batched predict outputs means implementations pmcmc suffer latency rdb still programs quality pmcmc predict outputs pmcmc convergence rate faster even direct wall clock time comparison rdb pmcmc appears converge faster programs rdb even relative number function applications equivalent results obtained relative eval counts hmm new defquery hmm observations predict states reduce states obs let state sample get peek states observe get state obs conj states state sample observations original deprecated assume list assume lambda cond list list list assume transition lambda discrete assume mem lambda index index discrete transition index assume lambda cond observe normal observe normal observe normal predict predict predict hmm program corresponds latent state inference problem hmm three states onedimensional gaussian observations known means variances transition matrix initial state distribution lines program organized observe time sequence axis reports sum divergences running sample average state occupancy across states hmm including initial state one trailing predictive state dkl returns one simulation latent state time step equal true marginal probability latent state indicator taking value time step vertical red line apply plot indicates time number applies takes run forwardbackward anglican interpreter fourth plot shows learned posterior distribution latent state value time steps including initial state trailing predictive time step rdb produces reasonable approximation true posterior slowly greater residual error new approach probabilistic programming inference mixture new defquery observations alpha beta let gamma loop observations observations crp alpha states empty observations predict states states predict count let state sample produce get state let sample sqrt beta sample normal normal sqrt observe first observations recur rest observations absorb state assoc state conj states state original deprecated assume crp assume class mem lambda assume var mem lambda gamma assume mean mem lambda normal var assume lambda list class class class class assume lambda count unique assume means lambda list mean cons mean means assume stds lambda list sqrt var cons var stds observe normal mean class var class observe normal mean class var class observe normal mean class var class predict predict predict means predict stds mixture program corresponds clustering unknown mean variance problem modelled via dirichlet process mixture gaussians unknown mean variance normalgamma priors divergence reported running sample estimate distribution number clusters data ground truth distribution ground truth distribution number clusters computed model data exhaustively enumerating partitions data analytically computing evidence terms exploiting conjugacy conditioning partition cardinality fourth plot shows posterior distribution number classes data computed methods relative ground truth program written way intentionally antagonistic pmcmc continuous likelihood parameters marginalized observe statements organized optimal ordering 
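The discrete-space test statistic described for the HMM program, a sum of divergences between running sample averages of state occupancy and the exact marginals, can be sketched as follows; the three-state ground-truth marginal here is an illustrative assumption, not the output of forward-backward.

    # Sketch of the discrete test statistic: KL divergence from the exact
    # marginal over latent states to the running empirical estimate.
    import math
    from collections import Counter

    def kl_divergence(p, q, eps=1e-12):
        return sum(pi * math.log(pi / max(q.get(k, 0.0), eps))
                   for k, pi in p.items() if pi > 0.0)

    def empirical(samples):
        counts = Counter(samples)
        n = len(samples)
        return {k: c / n for k, c in counts.items()}

    truth = {0: 0.2, 1: 0.5, 2: 0.3}        # illustrative exact marginal
    stream = [1, 0, 2, 1, 1, 2, 0, 1]       # accumulated predict outputs
    print(kl_divergence(truth, empirical(stream)))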
despite pmcmc outperforms rdb per simulation wall clock time apply count branching new defn fib loop recur inc fib defquery branching let poisson sample fib sample observe poisson predict original deprecated assume fib lambda cond else fib fib assume poisson assume fib poisson observe poisson predict branching program corresponding graphical model designed test correctness inference programs control logic execution paths vary number sampled values also illustrates mixing model shown fourth plot large mismatch prior posterior rejection importance sampling likely ineffective one observation single named random variable pmcmc rdb achieve essentially indistinguishable performance normalized simulation time apply count marsaglia new defm std let loop sample sample let std sqrt log recur sample sample defquery observations sigma let likelihood normal sigma reduce obs observe likelihood obs nil observations predict original deprecated assume lambda std define define define std sqrt log std assume std sqrt assume sqrt observe normal std observe normal std predict frank wood jan willem van meent vikash mansinghka marsaglia test program included completeness example type program pmcmc sometimes may efficient marsaglia name given rejection form algorithm sampling gaussian marsaglia test program corresponds inference problem observed quantities drawn gaussian unknown mean unknown mean generated anglican implementation marsaglia algorithm sampling gaussian axis test statistic computed finding maximum deviation accumulating sample analytically derived ground truth cumulative distribution functions cdf pmcmc rdb ground truth cdfs shown fourth plot marsaglia recursive rejection sampler may require many recursive calls conjecture rdb may faster pmcmc pmcmc pays statistical cost pay computational cost exploring program traces include many random procedure calls lead rejections whereas rdb due implicit geometric prior program trace length effectively avoids paying excess computational costs deriving unnecessarily long traces simulation hmm simulation mixture figure effect program line permutations simulation hmm line permutation syntactically semantically observe predict mutually exchangeable assume syntactic constraints given nature pmcmc reasonable expect line permutations could effect efficiency inference explored randomly permuting lines hmm mixture programs results shown fig blue lines correspond median twenty five runs pmcmc twenty five program line permutations including unmodified dark reversed light orange rdb hmm found natural ordering lines program resulted best performance pmcmc relative rdb ordering observe cause happen soon possible smc phase pmcmc effect permuting code lines interpolates inference performance optimal pmcmc best adversarial orderings rdb instead rdb performance demonstrated independent program line ordering mixture results show pmcmc outperforming rdb program reorderings seen original program ordering optimal respect pmcmc inference pmcmc presents opportunity significant gains inference efficiency prevent programmers seeking optimize performance manually programmers influence inference performance reordering program lines particular pushing observe statements near front program syntactically allowed restructuring programs lazily rather eagerly generate latent variables efficiency gains via automatic transformations online adaptation ordering may possible number particles fig shows number particles pmcmc inference engine affects performance performance improves function number 
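The Marsaglia program wraps the classical polar rejection method for sampling a Gaussian; a plain-Python rendering of that method, matching the structure of the Anglican code above, makes clear why each execution trace contains an unknown number of internal random draws.

    # The Marsaglia polar method: a rejection sampler for the Gaussian.
    # Each failed iteration consumes two fresh uniform draws, so trace
    # length is unbounded, which is the property the test exercises.
    import math, random

    def marsaglia_normal(mu, sigma):
        while True:
            x = random.uniform(-1.0, 1.0)
            y = random.uniform(-1.0, 1.0)
            s = x * x + y * y
            if 0.0 < s < 1.0:            # reject points outside the unit disk
                return mu + sigma * x * math.sqrt(-2.0 * math.log(s) / s)

    print(marsaglia_normal(1.0, math.sqrt(5.0)))

The geometric tail on the number of rejections is what the conjecture above appeals to: RDB's implicit geometric prior over trace length avoids paying for unnecessarily long traces, while PMCMC must simulate every rejection in every particle.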
particles plot red line indicates particles increasing dark light number particles plotted red simulation mixture figure effect particle count performance discussion pmcmc approach probabilistic program interpretation appears converge faster true conditiona distribution program execution traces new approach probabilistic programming inference programs correspond expressive models dense conditional dependencies methods pmcmc converges faster tests even normalizing computational time surprising striking reiterate includes computation done particle executions current versions anglican include inference core work remains achieve optimal parallelism due particular memory organization leading excessive locking overhead using single thread pmcmc surprisingly sometimes outperforms rdb per simulation anyway terms wall clock time specific choice syntactically forcing observe noisy requires language interpreter level restrictions checks hard constraint observe supported anglican exposing dirac likelihood programmers persisting help programmers avoid writing probabilistic programs finding even single satisfying execution trace also could perceived requiring programming style explored concurrent work venture may opportunities improve inference performance partitioning program variables either automatically via syntax constructs treating differently inference instance could sampled via plain via conditional smc one simple way would combine conditional smc rdb acknowledgments thank xerox google generous support also thank arnaud doucet brooks paige yura perov helpful discussions pmcmc probabilistic programming general references christophe andrieu arnaud doucet roman holenstein particle markov chain monte carlo methods journal royal statistical society series statistical methodology george box mervin muller note generation random normal deviates annals mathematical statistics noah goodman vikash mansinghka daniel roy keith bonawitz daniel tarlow church language generative models arxiv preprint roman holenstein particle markov chain monte carlo phd thesis university british columbia hubert lilliefors test normality mean variance unknown journal american statistical association george marsaglia thomas bray convenient method generating normal variables siam review tom minka winn guiver knowles microsoft research cambridge open group ieee std edition url http avi pfeffer ibal probabilistic rational programming language ijcai pages citeseer david spiegelhalter andrew thomas nicky best wally gilks bugs bayesian inference using gibbs sampling manual version mrc biostatistics unit institute public health cambridge stan development team stan modeling language user guide reference manual http david wingate andreas stuhlmueller noah goodman lightweight implementations probabilistic programming languages via transformational compilation proceedings international conference artificial intelligence statistics page
Clarity and Efficiency for Distributed Algorithms. Yanhong A. Liu, Scott D. Stoller, Bo Lin. Computer Science Department, Stony Brook University, Stony Brook, NY, USA. Abstract. This article describes a language for clear description of distributed algorithms and optimizations necessary for generating efficient implementations. The language supports high-level control flows in which complex synchronization conditions can be expressed using high-level queries, especially logic quantifications, over message history sequences. Unfortunately, such programs would be extremely inefficient, including consuming unbounded memory, if executed straightforwardly. We present new optimizations that automatically transform complex synchronization conditions into incremental updates of necessary auxiliary values as messages are sent and received. The core of the optimizations is the first general method for efficient implementation of logic quantifications. We have developed an operational semantics of the language, implemented a prototype of the compiler and the optimizations, and successfully used the language and implementation on a variety of important distributed algorithms. Categories and Subject Descriptors: Programming Techniques (Concurrent Programming); Programming Languages (Language Constructs and Features; Processors: compilers, optimization); Logics and Meanings of Programs (Specifying and Verifying and Reasoning about Programs; Semantics of Programming Languages); Computing Methodologies (Knowledge Representation Formalisms: logic). General Terms: Algorithms, Design, Languages, Performance. Keywords: distributed algorithms, queries, updates, incrementalization, logic quantifications, message histories, synchronization conditions, yield points. Introduction. Distributed algorithms are at the core of distributed systems, yet developing practical implementations of distributed algorithms with correctness and efficiency assurances remains a challenging, recurring task. The study of distributed algorithms has relied either on pseudocode with English, which is imprecise, or on formal specification languages, which are precise but harder to understand, and both lack mechanisms for building real distributed systems and are not executable. At the same time, programming of distributed systems has mainly been concerned with program efficiency and has relied mostly on the use of complex libraries and, to a lesser extent, on mechanisms in restricted programming models. What is lacking is a simple and powerful language that can express distributed algorithms at a high level, yet has a clear semantics for precise execution as well as verification, is fully integrated into widely used programming languages for building real distributed systems, and comes together with powerful optimizations that transform algorithm descriptions into efficient implementations. This article describes such a language, DistAlgo, for clear description of distributed algorithms, combining the advantages of pseudocode, formal specification languages, and programming languages. It supports high-level control flows, where the main flow of a process, including sending messages and waiting on conditions over received messages, can be stated directly as in sequential programs, and yield points where message handlers execute are specified explicitly and declaratively. It supports complex synchronization conditions, which can be expressed using high-level queries, especially quantifications, over message history sequences, without manually writing message handlers that perform incremental updates and obscure control flows. DistAlgo supports these features by building on an object-oriented programming language; we have also developed an operational semantics for the language. The result is that distributed algorithms can be expressed in DistAlgo clearly at a high level, like pseudocode, but also precisely, like formal specification languages, facilitating formal verification, and can be executed as part of real applications in programming languages. (This work was supported in part by NSF and ONR grants.) Unfortunately, programs containing control flows and synchronization conditions expressed at such a high level are extremely inefficient if executed straightforwardly: each quantifier introduces a linear factor in running time, and use of the histories of messages sent and
received may cause space usage unbounded present new optimizations allow efficient implementations generated automatically extending previous optimizations distributed programs challenging quantifications implemented prototype compiler optimizations experimented variety important distributed algorithms including paxos byzantine paxos experiments strongly confirm benefits language effectiveness optimizations article revised version liu main changes revised extended descriptions language optimization method new formal operational semantics abridged updated description implementation new description experience using distalgo teaching expressing distributed algorithms method transforms sending receiving sages updates message history sequences incrementally maintains truth values synchronization conditions necessary auxiliary values sequences updated finally removes sequences dead code appropriate even distributed algorithm appears simple high level subtle necessary details considered making difficult understand algorithm works precisely difficulty comes fact multiple processes must coordinate synchronize achieve global goals time delays failures attacks occur even determining ordering events nontrivial lamport logical clock fundamental distributed systems incrementally maintain truth values general quantifications method first transforms aggregations also called aggregate queries general however translating nested quantifications simply nested aggregations incur asymptotically space time overhead necessary transformations minimize nesting resulting queries running example use lamport distributed mutual exclusion algorithm running example lamport developed illustrate logical clock invented problem processes access shared resource need access mutually exclusively called critical section one process critical section time processes shared memory must communicate sending receiving messages lamport algorithm assumes communication channels reliable fifo figure contains lamport original description algorithm except notation instead rule comparing pairs timestamps process ids using lexical ordering iff word acknowledgment added rule simplicity omitting commonly omitted small optimization mentioned footnote description authoritative high level uses precise english found algorithm satisfies safety liveness fairness message complexity safe one process critical section time live process critical section requests fair requests served order logical timestamps request messages message complexity messages required serve request quantified order comparisons used extensively nontrivial distributed algorithms incrementalized easily mixed conditions systematically extract single quantified order comparisons transform efficient incremental operations overall method significantly improves time complexities reduces unbounded space used message history sequences auxiliary space needed incremental computation systematic incrementalization also allows time space complexity generated programs analyzed easily significant amount related research discussed section work contains three main contributions simple powerful language expressing tributed algorithms control flows synchronization conditions operational semantics full integration language systematic method incrementalizing complex chronization conditions respect sending receiving messages distributed programs general systematic method generating challenges understand algorithm carried precisely one must understand processes acts interactions cient implementations 
arbitrary logic quantifications together general queries figure simplified algorithm expressed using basically two send statements receive definition await statement results running example shown figures details explained later figure shows lamport original algorithm expressed distalgo also includes configuration setup running processes trying enter critical section point execution figures show two alternative optimized programs incrementalization lines comments new except await statement simplified figure shows simplified algorithm algorithm defined following five rules convenience actions defined rule assumed form single event request resource process sends message requests resource every process puts message request queue timestamp message process receives message requests resource places request queue sends timestamped acknowledgment message release resource process removes requests resource message request queue sends timestamped releases resource message every process process receives releases resource message removes requests resource message request queue process granted resource following two conditions satisfied requests resource message request queue ordered request queue relation define relation messages identify message event sending received acknowledgment message every process timestamped later note conditions rule tested locally distalgo language support distributed programming high level four main concepts added commonly used programming languages especially languages python java distributed processes sending messages control flows yield points waits receiving messages synchronization conditions using queries message history sequences configuration processes communication mechanisms distalgo supports concepts options generalizations ease programming described formal operational semantics distalgo presented appendix figure original description english processes sending messages distributed processes concurrent executions programmed instructions like threads java python except process private memory shared processes processes communicate message passing three main constructs used defining processes creating processes sending messages process definition form defines type processes defining class extends class process set method definitions handler definitions described cesses process must order handling events according five rules trying reach goal entering exiting critical section also responding messages processes must also keep testing complex condition rule events happen state machine based formal specifications used fill details precisely time harder understand example formal specification lamport algorithm automata pages occupies one fifth pages actually implement distributed algorithms details many additional aspects must added example creating processes letting establish communication channels incorporating appropriate logical clocks lamport clock vector clock needed guaranteeing specified channel properties reliable fifo integrating algorithm application specifying critical section tasks invoking code algorithm part overall application furthermore easy modular fashion class extends process special method setup may defined initially setting data process process execution starts special method run may defined carrying main flow execution special variable self refers process process creation statement form creates new processes type node value expression returns resulting process set processes node running distalgo program machine identified host name machine plus name 
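Because rules 1 through 5 above order events by timestamps, they rest on Lamport's logical clock; a hedged plain-Python sketch of the clock discipline the algorithm assumes follows, with process ids breaking ties so that (timestamp, pid) pairs are totally ordered lexically.

    # Sketch of a Lamport logical clock: tick on each local event or send;
    # on receipt, jump past the message timestamp. Names are illustrative.
    class LamportClock:
        def __init__(self, pid):
            self.pid = pid
            self.time = 0

        def tick(self):                  # local event or send
            self.time += 1
            return (self.time, self.pid)

        def on_receive(self, msg_time):  # msg_time: sender's timestamp
            self.time = max(self.time, msg_time) + 1

    a, b = LamportClock(1), LamportClock(2)
    ts = a.tick()            # a's send is stamped (1, 1)
    b.on_receive(ts[0])      # b's clock jumps to 2
    print(ts < b.tick())     # True: lexical order on (time, pid)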
running distalgo program specified starting program approach address challenges distalgo language compilation executable programs especially optimization incrementalization expensive synchronizations described sections respectively unexpected result incrementalization led discover simplifications lamport original algorithm new number clause optional defaults local node respectively new process set calling setup method call start process starts execution run method statement sending messages form sends message value expression mexp process set processes value expression pexp send mexp pexp multiple yield points without using method definition invocations syntactic sugar receive handled one yield point written point synchronization associated actions expressed using general nondeterministic await statements simple await statement one two forms waits value expression bexp become true first form waits timeout time period second form await bexp await timeout message value convention tuple whose first component string called tag indicating kind message general nondeterministic await statement form waits values expressions bexpk become true timeout time period nondeterministically selects one statements stmtk stmt whose corresponding conditions satisfied execute timeout clauses optional control flows handling received messages key idea use labels specify program points control flow yield handling messages resume afterwards three main constructs used specifying yield points handling received messages synchronization yield point preceding statement form identifier label specifies point program place control yields handling unhandled messages resumes afterwards await bexpk stmtk timeout stmt await statement must preceded yield point handling messages waiting yield point specified explicitly default message handlers executed point constructs make easy specify process flow control also responding messages also easy specify process responds messages example writing receive definitions run method containing await false label optional omitted yield point explicitly referred handler definitions defined next handler definition also called receive definition form handles yield points labeled messages match mexpi sent pexpi mexpi pexpi parts tuple pattern previously unbound variables pattern bound corresponding components value matched sequence statements executed matched messages synchronization conditions using queries synchronization conditions conditions expressed using comprehensions sets processes sequences messages queries used commonly distributed algorithms make complex synchronization conditions clearer easier write complexity distributed algorithms measured round complexity message complexity time complexity local processing quantifications especially common directly capture truth values synchronization conditions discovered number errors initial programs written using aggregations place quantifications developed method systematically optimize quantifications example regularly expressed larger elements max either forgot handle case empty handled hoc fashion naive use aggregation operators like max may also hinder generation efficient implementations receive mexpk pexpk clauses optional defaults process yield points respectively clause used message automatically extended process sender tuple pattern tuple component expression variable possibly prefixed wildcard recursively tuple pattern expression variable prefixed means corresponding component tuple matched must equal value expression variable 
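The await construct described above can be realized at run time by blocking on a condition variable that the message-receiving thread notifies, re-testing the synchronization condition after each arrival. The following is a minimal sketch of one plausible implementation strategy, not the DistAlgo compiler's actual code; `predicate` and `handle_pending` are assumed names.

    # Sketch of `await bexp timeout t`: run pending receive handlers at
    # this yield point, test the condition, and otherwise block until the
    # helper (receiving) thread notifies `cond` or the deadline passes.
    import time, threading

    def await_condition(cond, predicate, handle_pending, timeout=None):
        deadline = None if timeout is None else time.monotonic() + timeout
        with cond:
            while True:
                handle_pending()        # receive handlers for this yield point
                if predicate():
                    return True         # synchronization condition became true
                remaining = None if deadline is None else deadline - time.monotonic()
                if remaining is not None and remaining <= 0:
                    return False        # timed out
                cond.wait(remaining)    # woken on each message arrival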
respectively pattern matching succeed variable prefixed matches value becomes bound corresponding component tuple matched wildcard written matches value support receive mimics common usage pseudocode allowing message handler associated define operations sets operations sequences except elements processed order square brackets used place curly braces agg sexp agg exp sexpk bexp query forms also tuple quantification query one two forms pattern variables bound corresponding components matched elements value sexpi omit bexp bexp true called existential universal quantifications respectively plus set whose values bound query query every variable must reachable parameter recursively leftside variable membership clause whose variables reachable given values parameters query returns true iff respectively combinations values variables satisfy membership clauses sexpi expression bexp evaluates true existential quantification returns true variables query also bound combination values called witness satisfy membership clauses condition bexp use empty set use element addition deletion respectively use membership test negation respectively assume hashing used implementing sets expected time set initialization element addition removal membership test consider operations involve iterations sets sequences expensive iteration set sequence incurs cost linear size set sequence quantifications comprehensions aggregations considered expensive distalgo sequences received sent containing messages received sent respectively process sexpk bexp sexpk bexp example following query returns true iff element greater element sequence received updated yield points message arrives handled execution reaches next yield point adding message received running matching receive definitions associated yield point use received interchangeably received mean message process received optional specified message received automatically extended process sender another example following query containing nested quantification returns true iff element greater element additionally query returns true variable bound element greater element comprehension query form given values parameters query returns set values exp combinations values variables satisfy membership clauses sexpi condition bexp exp sexpk bexp sequence sent updated send statement message sent process added sent use sent interchangeably sent mean message process sent optional specified process sent specified send statement implemented straightforwardly received sent create huge memory leak grow unboundedly preventing use practical programming method remove maintaining auxiliary values needed incremental computation example following query returns set products greater abbreviate sexp bexp sexp configuration one specify channel types handling messages configuration items specifications declarative algorithms expressed without unnecessary implementation details describe basic kinds configuration items first one specify types channels passing messages example following statement configures channels fifo bexp aggregation also called aggregate query query one two forms agg aggregation operator including count sum min max given values parameters query returns value applying agg set value sexp first form multiset values exp combinations values variables satisfy membership clauses sexpi condition bexp second form configure channel fifo class extends process def setup options channel include reliable reliable fifo either fifo reliable included tcp used process communication otherwise udp used 
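The quantification and comprehension queries described above have direct Python renderings as any()/all() over comprehensions, including the witness binding for a successful existential; small self-contained examples with illustrative data follow.

    # Quantifications as comprehensions; xs and ys are made-up data.
    xs = [3, 1, 4, 1, 5]
    ys = [2, 0, 2]

    # "each y in ys has some x in xs | x > y" -- nested universal/existential
    print(all(any(x > y for x in xs) for y in ys))           # True

    # existential with a witness, as described for `some ...`
    witness = next((x for x in xs if all(x > y for y in ys)), None)
    print(witness)                                           # 3

    # comprehension "{x*y : x in xs, y in ys | x > y}"
    print({x * y for x in xs for y in ys if x > y})          # {0, 6, 8, 10}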
general channels also configured separately messages set processes set processes one specify much effort spent processing messages yield points example def mutex task run task mutual exclusion request fig send request self request self wait req others acks await request fig self implies self received ack task critical section release request self fig send release self receive request request send ack self fig receive release request request fig def run def task mutex task configure handling configures system handle messages yield point default another example one specify time limit one also specify different handling effort different yield points logical clocks used many distributed algorithms one specify logical clock lamport clock used configure clock lamport configures sending receiving messages update clock appropriately call returns current value logical clock overall distalgo program consists set process definitions method main possibly conventional program parts method main specifies configurations creates sets starts set processes distalgo language constructs used process definitions method main implemented according semantics described conventional program parts implemented according conventional semantics set processes set pending requests def main configure channel reliable configure clock lamport new main method process tasks process define critical section task run task mutual exclusion tasks process main method application tasks application fifo use reliable fifo channel use lamport clock create processes class pass process processes start run method process tasks application figure original algorithm lines complete program distalgo language constructs constructs use languages mostly use python syntax indentation scoping elaboration comments etc succinctness except exp assignment conventions java keyword extends subclass keyword new object creation omission self equivalent java ambiguity ease reading compiling executable programs compilation generates code create processes specified machine take care sending receiving messages realize specified configuration particular inserts appropriate message handlers yield point processes sending messages process creation compiled creating process specified default machine private memory space fields process implemented using two threads main thread executes main flow control process helper thread receives enqueues messages sent process constructs involving set processes new easily compiled loops sending message process compiled calls standard message passing api sequence sent used program also insert calling method remote process object compiled remote method call example figure shows lamport algorithm expressed distalgo algorithm figure corresponds body mutex two receive definitions lines total rest program lines total shows algorithm used application execution application starts method main configures system run lines method mutex two receive definitions executed needed follow five rules figure lines recall implicit yield point await statement note figure meant replace figure realize figure precisely executable manner figure meant compared lowerlevel specifications programs control flows handling received messages yield point compiled call message dler method updates sequence received received used program executes bodies receive definitions whose clause includes quantifications dominantly used writing synchronization conditions assertions specifications programs unfortunately implemented straightforwardly quantification introduces 
cost factor linear size collection quantified optimizing expensive quantifications general difficult main reason used practical programs even logic programs programmers manually write complex code difficulty comes expensive enumerations collections complex combinations join conditions address challenge converting quantifications aggregations optimized systematically using previously studied methods however quantification converted multiple forms aggregations one use depends kinds updates must handled query incrementalized updates direct conversion nested quantifications nested aggregations lead much complex incremental computation code asymptotically worse time space complexities maintaining intermediate query results note existential quantification convert efficient aggregation witness needed witness needed incrementally compute set witnesses cisely receive definition compiled method takes message argument matches message patterns receive clause ing succeeds binds variables matched pattern appropriately executes statement body receive definition method compiled yield point following message handled execute received used program call methods generated receive definitions whose clause includes remove message queue await statement compiled synchronization using blocking use blocking wait new message arrives timeout specified await reached configuration configuration options taken account compilation straightforward way libraries modules used much possible example fifo reliable channel specified compiler generate code uses tcp sockets converting quantifications present converted forms describe forms use discuss updates must handled correctness rules presented proved manually using logic set theory rules ensure value resulting query expression equals value original quantified expression table shows general rules converting single quantifications equivalent aggregations use aggregation operator count converting universal quantifications either rule could used choice affect asymptotic cost small constant factors rule requires maintaining count rule requires computing latter generally faster unless count already needed purposes certainly faster bexp simplified bexp negation rules table general bexp boolean expression converting single quantifications nested quantifications converted one time inside results may much complicated necessary example incrementalizing expensive synchronizations incrementalization transforms expensive computations efficient incremental computations respect updates values computations depend identifies expensive queries determines updates may affect query result transforms queries updates efficient incremental computations much incrementalization studied previously discussed section new method systematic handling quantifications synchronization expensive queries especially nested alternating universal existential quantifications quantifications containing complex order comparisons systematic handling updates caused sending receiving handling messages way updates program result drastic reduction time space complexities expensive computations using quantifications expensive computations general involve repetition including loops recursive functions comprehensions aggregations quantifications collections optimizations studied loops less recursive functions comprehensions aggregations least quantifications basically corresponding frequently constructs traditionally used programming however queries increasingly used programming bexp would converted using rule count bexp using 
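The correctness of the Table 1 style conversions can be checked mechanically on any concrete set: a universal quantification equals a count comparison over the same set, and an existential equals a non-zero count. The data and condition below are illustrative.

    # Checking quantification-to-aggregation conversions on sample data;
    # `bexp` stands for an arbitrary boolean condition over elements.
    s = [1, 2, 3, 4, 5, 6]
    bexp = lambda x: x % 2 == 0

    # each x in s | bexp(x)  <=>  count({x in s | bexp(x)}) == count(s)
    assert all(bexp(x) for x in s) == (sum(1 for x in s if bexp(x)) == len(s))

    # some x in s | bexp(x)  <=>  count({x in s | bexp(x)}) != 0
    assert any(bexp(x) for x in s) == (sum(1 for x in s if bexp(x)) != 0)

The point of the conversion is that the count on the right-hand side, unlike the quantification on the left, can be maintained incrementally under element additions and removals.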
rule count count bexp count simpler conversion possible example using rule table described next parameter values basic updates assignments query parameters exp query parameter updates objects collections used query objects updates expressed field assignments exp collections updates expressed initialization empty element additions removals distributed algorithms distinct class important updates caused message passing updates caused two ways table rules converting single quantifications quantification aggregation bexp count bexp count bexp count bexp count bexp table shows general rules converting nested quantifications equivalent aggregations use aggregation operator count rules yield much simpler results repeated use rules table example rule table yields much simpler result using two rules table previous example significantly rules generalize number quantifier rules generalize number quantifiers one alternation encountered complicated quantifications algorithms found well known one alternation rarely used commonly used quantifications converted aggregations example twelve different algorithms expressed distalgo total quantifications occurrence one alternation table shows general rules converting single quantifications single order comparison linear order equivalent queries use aggregation operators max min rules useful max min general maintained incrementally log time space overhead additionally element additions max min maintained efficiently time space table shows general rules decomposing boolean combinations conditions quantifications obtain quantifications simpler conditions particular boolean combinations order comparisons conditions transformed extract quantifications single order comparison rules table applied boolean combinations inner quantifications conditions transformed extract directly nested quantifications rules table applied example sending receiving messages updates sequences sent received respectively incrementalization code generated described section explicitly perform updates handling messages code receive definitions updates variables parameters queries computing synchronization conditions used compute values parameters established updates determined using previously studied analysis methods incremental computation given expensive queries updates query parameters efficient incremental computations derived large classes queries updates based language constructs used using library rules built existing data structures aggregations converted quantifications algebraic properties aggregation operators exploited efficiently handle possible updates particular resulting aggregate query result obtained time incrementally maintained time per update sets maintained affected plus time evaluating conditions aggregation per update total maintenance time element addition deletion query parameter least linear factor smaller computing query result scratch additionally aggregation operators max min used element additions space overhead note max min used naively element deletions may unnecessary overhead space log maintenance time per update using sophisticated data structures maintain max min element deletion incremental computation improves time complexity total time repeated expensive queries larger repeated incremental maintenance generally true incrementalizing expensive synchronization conditions expensive queries synchronization conditions need evaluated repeatedly relevant update message history condition becomes true incremental maintenance update least linear factor faster 
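The incremental maintenance just described can be made concrete with a small sketch: a count aggregation kept up to date in O(1) time per element addition or removal, instead of being recomputed by iteration. Class and variable names are illustrative.

    # Incrementalized count: invariant count == |{x in s | pred(x)}|,
    # maintained at each update to s rather than recomputed from scratch.
    class CountIf:
        def __init__(self, pred):
            self.pred = pred
            self.count = 0

        def add(self, x):        # maintenance code at each "s.add(x)"
            if self.pred(x):
                self.count += 1

        def remove(self, x):     # maintenance code at each "s.remove(x)"
            if self.pred(x):
                self.count -= 1

    # e.g. counting pending (clock, pid) requests earlier than my request (5, 0)
    pending = CountIf(lambda req: req < (5, 0))
    pending.add((3, 1)); pending.add((7, 2)); pending.remove((3, 1))
    print(pending.count)   # 0 -- an await condition can now test count == 0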
single message updates slower generally computing scratch bexp implies converted using rule table bexp converted using rule table bexp min bexp updates caused message passing recall parameters query variables query whose values bound query updates may affect query result include updates query parameters also updates objects collections reachable table rules converting nested quantifications nested quantifications bexp count bexp count bexp count count bexp count count existential universal aggregation quantifications conditions contain max order comparisons deletions sets sequences whose elements compared rules table used space overhead linear sizes sets maintained aggregated min max quantifications conditions contain order comparisons additions sets sequences whose elements compared rules table used extract single quantified order comparisons rules table used convert extracted quantifications case space overhead reduced constant min aggregation min max nested quantifications one level nesting rules table used extract directly nested quantifications rules table used resulting incremental maintenance overhead maintaining structure done overhead maintaining structure conditions contain order comparisons rules table used extract single quantified order comparisons rules table used reduce overhead logarithmic time linear space min max table rules decomposing conditions extract quantified comparisons quantification implies implies decomposed quantifications general multiple ways conversion may possible besides small differences rules table rules table particular nested quantifications two alternations one must choose two alternating quantifiers transform first using rule table encountered queries studied aspect general method transform ways possible obtain time space complexities result choose one best time space complexities calculated using cost model set operations given section number possible ways exponential worst case size query query size usually small constant bexp bexp count bexp count bexp bexp allow efficient incremental computation given updates method transforms quantification follows table rules single quantified order comparison aggregation table summarizes incremental computation methods aggregate queries methods expressed incrementalization rules query program matches query form table update parameter query program matches update form table transform query corresponding replacement insert update corresponding maintenance fresh variables introduced different query hold query results auxiliary data structures third rule data structure stores argument set max supports priority queue operations example program figure three quantifications used synchronization condition await statement two nested condition copied except ack received used place received ack table incrementalization rules count max converting quantifications aggregations described using tables proceeds follows first conjunct universal quantification converted using rule table contains order comparison elements element deletions rule used slightly simpler negated condition simplified second conjunct nested quantification converted using rule table resulting expression query count updates replacement number inserted maintenance number number number cost cost query max updates replacement maximum inserted maintenance maximum maximum maximum cost cost request self implies self ack received count request self count ack received count query replacement cost max updates inserted maintenance cost new new log log rule min 
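Where Table 3 turns a single quantified order comparison into a min or max aggregate, the text notes that deletions require a priority-queue-like structure with logarithmic maintenance. A lazy-deletion sketch over Python's heapq is one way to realize this (it assumes distinct elements, as with timestamped requests); the empty-set case is handled explicitly, as the text cautions.

    # Min aggregate with O(log n) maintenance under additions/deletions.
    import heapq

    class MinSet:
        def __init__(self):
            self.heap, self.dead = [], set()

        def add(self, x):
            heapq.heappush(self.heap, x)

        def remove(self, x):
            self.dead.add(x)             # lazy: actually dropped at query time

        def min(self):
            while self.heap and self.heap[0] in self.dead:
                self.dead.discard(heapq.heappop(self.heap))
            return self.heap[0] if self.heap else None

    q = MinSet()
    q.add((3, 1)); q.add((5, 2)); q.remove((3, 1))
    # "each x in s | y <= x"  <=>  s is empty or y <= min(s)
    y = (4, 0)
    print(q.min() is None or y <= q.min())   # True: min is (5, 2)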
similar rule max updates parameters first conjunct additions removals requests also assignment updates parameters second conjunct additions ack messages received assignment initial assignment incremental computation introduces variables store values three aggregations converted query transforms aggregations use introduced variables incrementally maintains stored values updates follows yielding figure overall incrementalization algorithm introduces new variables store results expensive queries subqueries well appropriate additional values forming set invariants transforms queries subqueries use stored query results additional values transforms updates query parameters also incremental maintenance stored query results additional values particular queries nested inner queries transformed outer queries note comprehension bexp incrementalized respect changes parameters boolean expression bexp well addition removal elements bexp contains nested subqueries subqueries transformed incremental maintenance query results become additional updates enclosing query end variables computations dead transformed program eliminated particular sequences received sent eliminated appropriate queries using compiled message handlers store maintain values needed incremental evaluation synchronization conditions first conjunct store set value count value two variables say earlier respectively first conjunct becomes assigned new value let earlier let size taking time amortized time request earlier served request added defined self holds add request earlier increment taking time similarly deletion test definedness undefined inserted variable might defined scope maintenance code note request self particular added removed earlier updated self self trivially false second conjunct store set value two count values three variables say responded total respectively conjunct becomes total initialized setup class extends process def setup count assign total size taking time done process assigned new value let responded let taking time ack message added received associated conditions hold increment taking time test definedness omitted maintenance receiving ack messages always defined small optimization incorporated incrementalization rule could done analysis covers distributed data flows note incrementalization uses basic properties primitives libraries properties incorporated incrementalization rules running example property used call returns timestamp larger existing timestamp values thus assignment method mutex earlier responded incrementalization rule maintaining earlier specifies update maintenance earlier similarly maintaining responded simplifications could facilitated analyses determine variables holding logical times sets holding certain element types incrementalization rules use program analysis results conditions figure shows optimized program incrementalization synchronization condition lines figure lines comments new except synchronization condition await statement simplified synchronization condition takes time compared computed scratch tradeoff amortized time overhead updates receiving ack messages using based representation sets maintaining earlier responded done using one bit process note sequence received used synchronization condition figure longer used incrementalization values needed evaluating synchronization condition stored new variables introduced earlier responded total drastic space improvement unbounded received linear number processes total num processes def mutex task request count earlier send 
request self request self await total task release request self send release receive request undefined defined self comparison conjunct request earlier earlier request add earlier increment request send ack self receive ack responded receive release request undefined defined self comparison conjunct request earlier earlier request delete earlier decrement request set num set num pending earlier requests pending earlier requests responded processes responded processes use maintained results self new message handler comparison conjunct membership conjunct responded already add responded increment figure optimized program incrementalization definitions run main figure consider first conjunct synchronization condition await statement figure copied request self implies self one might written following instead seems natural especially universal quantification supported example naive use aggregation operator min note resulting program figure need use queue even though queue used original description figure variable simply set thus element addition removal takes time show min used naively sophisticated data structure supporting priority queue needed incurring log time update instead time figure additionally query using min correct special care must taken deal case argument min empty min undefined self min request self however incorrect argument min may empty case min undefined instead resorting commonly used special values maxint hoc error prone general empty case added first disjunct disjunction class extends process def setup count new request self self min request self def mutex task request send request self request self await self number total task release request self send release receive request add data structure request send ack self receive ack responded number receive release request delete data structure request fact original universal quantification first conjunct await statement converted exactly disjunction using rule table rule table method consider conversion leads worse resulting program figure shows resulting program incrementalization synchronization condition uses disjunction stores argument set min supports priority queue operations commented lines new compared figure except synchronization condition await statement simplified program appears shorter figure long complex code maintaining data structure included fact similar figure except used maintained instead earlier program figure still drastic improvement original program figure synchronization condition reduced time received removed figure difference maintaining incrementalizing min element addition deletion takes log time opposed time maintaining earlier figure total num processes data structure maintaining requests processes set responded processes num responded processes use maintained results self new message handler comparison conjunct membership conjunct responded already add responded increment number figure optimized program use min incrementalization definitions run main figure simplifications original algorithm consider original algorithm figure note incrementalization determined need process update auxiliary values request figures based discovered manually updates process request affect two uses lines figure use line figure remove figures addition remove lines figure remove test self becomes always true synchronization condition yielding simplified original algorithm furthermore note remaining updates figure merely maintain pending requests others remove lines entire receive definition release messages using 
first conjunct await statement clock fifo channels incrementalization rules maintaining result new condition incorporate property similar way described figure except could facilitated also analysis determines component received message holding sender message class extends process def setup def mutex task request send request self await received request received release implies self received ack task release send release self receive request send ack self received request received release implies self figure simplified algorithm definitions run main figure figure shows resulting simplified algorithm incrementalizing program yields essentially programs figures except needs use property message added received messages process received smaller timestamp property follows use logical implementation experiments developed prototype implementation compiler optimizations distalgo evaluated plementing set distributed algorithms described previously also used distalgo teaching distributed algorithms distributed systems students used language system programming assignments course projects summarize results former describe experience latter overview update implementation distalgo implementation takes distalgo programs written extended python applies analyses optimizations especially queries generates executable python code optionally interfaces incrementalizer apply incrementalization generating code applying incrementalization uses methods implementation previous work library incrementalization rules developed manually mostly following systematic method applied automatically using invts set heuristics currently used select best program generated incrementalizing differently converted aggregations extensive implementation distalgo first prototype released gradually improved improved methods implementation incrementalization also developed replace manually written incrementalization rules better select best transformed programs incremental program grows linearly number requests original program compared running times best manually ten programs programming languages running single machine generated distalgo takes twice long python version takes twice long java version takes twice long version takes four times long erlang version python well known slow compared java focused optimizing constant factors erlang significantly faster rest use threads implement processes facilitated functional language however among programs lamport distributed mutual exclusion erlang one besides distalgo whose memory usage fixed number processes grows linearly number requests programming distributed algorithms high level also allowed discover several improvements correctness efficiency aspects algorithms example pseudocode process commander waiting messages containing ballot majority acceptors expressed starting waitfor set initialized acceptors ever loop repeatedly updating waitfor testing message containing ballot arrives test incorrect implemented directly commonly used languages java even python python integer division discards fractional part example test becomes false true distalgo entire code simply written evaluation implementing distributed algorithms used distalgo implement variety distributed algorithms including twelve different algorithms distributed mutual exclusion leader election atomic commit well paxos byzantine paxos multipaxos summarized previously results evaluation using programs follows distalgo programs consistently small ranging lines much smaller specifications programs written languages mostly size 
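The truncating-division hazard in the majority test reported above is easy to reproduce; the numbers below are made up, and which boundary bites depends on whether the test is written with > or >=, but a division-free comparison of counts is safe in any language.

    # Illustrative reproduction of the integer-division majority bug.
    n = 5                     # acceptors
    acks = 2                  # ballots received so far

    print(acks >= n / 2)      # Python 3: 2 >= 2.5 -> False (correct)
    print(acks >= n // 2)     # truncating (Java, Python 2 ints):
                              # 2 >= 2 -> True (wrong: 2 of 5 is no majority)
    print(2 * acks > n)       # division-free majority test, always correct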
also able find algorithms written languages best effort write lamport distributed mutual exclusion programming languages resulted lines lines java lines python lines erlang compared lines distalgo await count received count acceptors using standard majority test correct whether integer float experience teaching distributed algorithms distalgo also helped tremendously teaching distributed algorithms makes complex algorithms completely clear precise directly executable students learn distalgo quickly even small programming assignment despite know python thanks power clarity python particular students distributed systems courses used distalgo dozens course projects implementing core network protocols distributed graph algorithms distributed coordination services chubby zookeeper distributed hash tables kademlia chord pastry tapestry dynamo distributed file systems gfs hdfs distributed databases bigtable cassandra compilation times without incrementalization der seconds intel cpu memory incrementalization times seconds generated code size ranges lines python including lines fixed library code execution time space confirm analyzed totic time space complexities example lamport distributed mutual exclusion total cpu time linear number processes incrementalized program superlinear original program fixed number processes memory usage constant code devoted distributed systems aspects numbers teams chose various languages java erlang elixir variant erlang javascript store distributed processing platform mapreduce others distributed programming features used extensively students process creation setup sending messages control flows receive definitions well await synchronization declarative exception queries message histories students trained many courses handle events imperatively evaluated incrementalization students programs execution efficiency problem overall students experience helps confirm distalgo allows complex distributed algorithms services implemented much easily commonly used languages java summarize two specific instances graduate class fall students initially planned use java course projects familiar wanted strengthen experience using instead using distalgo implementing distributed systems however one programming assignment using distalgo students switched distalgo course projects except one student extensive experience including several years internship microsoft research programming distributed systems last assignment teams implemented extension banking service one language choice teams chose distalgo even though students know python none knew distalgo beginning class words majority students decided implementation type system better distalgo even compared languages experience widely used asked team compare experiences two languages teams consistently reported development distalgo faster easier development language even though students know python project distalgo code significantly shorter surprise java require code even students used existing networking libraries encouraged comparison erlang interesting languages designed support distributed programming teams chose erlang average distalgo erlang code sizes measured line code respectively teams chose average distalgo code sizes respectively student wrote lines compared lines distalgo written several students chose project implementing several optimizations furthermore program incomplete lacking optimizations students distago programs included related work conclusion student distalgo quickly wide spectrum languages notations used 
describe distributed algorithms one end pseudocode english used gives flow algorithms lacks details precision needed complete understanding end state machine based specification languages used automata completely precise uses control flows make harder write understand algorithms also many notations extremes much precise completely precise also giving control flow raynal pseudocode lamport pluscal however languages notations lack concepts mechanisms building real distributed applications languages executable many programming languages support programming distributed algorithms applications support distributed programming messaging libraries ranging relatively simple socket libraries complex libraries mpi many support remote procedure call rpc remote method invocation rmi allows confirming took lines biggest surprise program order magnitude slower distalgo program several weeks debugging found due improper use library function main contrast student concluded huge advantage distalgo ease programming program understanding mention unexpected performance advantage graduate class fall team two students first implemented banking service two languages distalgo another language choice python excluded python language implementing service closely related languages would less educational service uses chain replication tolerate crash failures service offers simple banking operations get balance deposit withdrawal transfer transfer student wanted research distalgo asked reimplement project distalgo process call subroutine another process without programmer coding details many also support asynchronous method invocation ami allows caller block get reply later programming languages erlang model support message passing process management built language also languages distributed programming argus lynx emerald languages lack constructs expressing control flows complex synchronization conditions much higher level constructs extremely difficult implement efficiently distalgo construct declaratively precisely specifying yield points handling received messages new feature seen languages distalgo support history variables synchronization conditions await timeout programming language simple combination synchronous await asynchronous receive allows distributed algorithms expressed easily clearly much work producing executable implementations formal specifications process algebras automata unity seuss well recently proposed languages distributed algorithms languages meld overlog bloom prologbased language dahl language eventml operational semantics studied recently variant meld called linear meld allows updates encoded conveniently meld using linear logic compilation distalgo executable implementations easy designed distalgo given operational semantics highlevel queries quantifications used synchronization conditions compiled loops straightforwardly may extremely inefficient none prior works study powerful optimizations quantifications efficiency concern main reason similar language constructs whether queries assertions rarely used supported commonly used languages incrementalization studied extensively systematically based languages applying hoc fashion specific problems however systematic incrementalization methods based languages centralized sequential programs loops set languages recursive functions logic rules objectoriented languages work first extend incrementalization distributed programs support synchronization conditions allows large body previous work incrementalization especially sets sequences used 
optimizing distributed programs quantifications centerpiece logic dominantly used writing synchronization conditions assertions specifications results generating efficient implementations database area despite extensive work efficient implementation queries efficient implementation quantification studied limited scope extremely restricted query forms logic programming handling universal quantification based variants transformations even logic programming systems support universal quantification method first general systematic method incrementalizing arbitrary quantifications although much challenging optimize set queries method combines set general transformations transform aggregations efficiently incrementalized using best previous methods conclude article presents powerful language method programming optimizing distributed algorithms many directions future work formal verification theoretical side generating code languages practical side many additional analyses optimizations particular language high level abstraction also faciliates formal verification programs also generated efficient implementations generated systematic optimizations besides developing systematic optimizations started study formal verification distributed algorithms implementations starting concise descriptions distalgo appendix semantics distalgo give abstract syntax operational semantics core language distalgo operational semantics reduction semantics evaluation contexts abstract syntax abstract syntax defined figures use syntactic sugar sample code use infix notation binary operators notation symbol grammar terminal symbol starts letter symbol grammar symbol starts letter production alternatives separated break means occurrences means occurrences program configuration processclass method processclass class classname extends classname method receivedef receivedef receive label statement receive statement receivepattern pattern instancevariable method def methodname parameter statement defun methodname parameter expression statement instancevariable expression instancevariable new classname instancevariable pattern iterator expression statement statement expression statement else statement iterator statement expression statement expression send tuple expression label await expression statement anotherawaitclause label await expression statement anotherawaitclause timeout expression skip expression literal parameter instancevariable tuple expression unaryop expression binaryop expression expression isinstance expression classname expression expression conjunction expression expression disjunction iterator expression iterator expression tuple expression figure abstract syntax part denotes result applying substitution constructs whose semantics given translation represent substitutions functions variables expressions constructors classes setup methods process classes eliminated translation ordinary methods assign fields objects requirements programs method call field assignment explicitly specify target object translated method call field assignment respectively self method program must named main gets executed instance process class program starts await statement without explicitly specified words associated label empty translated await statement explicitly specified label generating fresh label name replacing empty label await statement inserting every clause class containing await statement label used receive definition must label statement appears class receive definition invocations methods defined using def 
appear method call statements invocations methods defined using defun appear method call expressions unaryop istuple len binaryop plus select boolean negation test whether value tuple length tuple equality sum select returns component tuple pattern instancevariable tuplepattern tuplepattern patternelement patternelement literal instancevariable iterator pattern expression anotherawaitclause expression statement configuration configuration channelorder channelreliability channelorder fifo unordered channelreliability reliable unreliable classname methodname parameter instancevariable field label literal booleanliteral integerliteral booleanliteral true false integerliteral figure abstract syntax part ellipses common syntactic categories whose details unimportant boolean operators eliminated follows replaced iter replaced iter constants variables prefixed let denote removing prefix quantification rewritten istuple len select select aggregation eliminated translation comprehension followed loop iterates set returned comprehension loop updates accumulator variable using aggregation operator consider loop let iterators containing tuple patterns rewritten iterators without tuple patterns follows consider existential quantification let fresh variable let substitution replaces select variable prefixed let contain indices fresh variables let contain indices variables prefixed let substitution replaces select let contain indices constants variables prefixed let denote removing prefix note may denote set sequence duplicate bindings tuple variables eik filtered set sequence loop rewritten code figure ables witness existential quantifications could added new form statement comprehensions variables prefixed translated comprehensions without prefixing specifically variable prefixed comprehension replace occurrences comprehension occurrences fresh variable add conjunct boolean condition object creation comprehension statements expressions comprehension creating new set parameter must include self values method parameters updated using assignment statements brevity local variables methods omitted core language consequently assignment allowed instance variables comprehensions statically eliminated follows comprehension pattern replaced semantically loop copies contents mutable sequence set immutable tuple iterating ensure changes sequence set loop body affect iteration implementation could use optimizations achieve semantics without copying possible new set brevity among standard arithmetic operations etc include one representative operation abstract syntax semantics others handled similarly wildcards eliminated tuple patterns replacing occurrence wildcard fresh variable remote method invocation invocation method another process process started translated message communication semantics model timeouts await statements simply allowed occur omit concept node process location semantics omit node argument constructor creating instances process classes process location affect aspects semantics notes classname must include process process predefined class defined explicitly process fields sent received method start omit configure handling statements syntax semantics configure handling semantics configure handling options easily added grammar allows receive definitions appear classes extend process receive definitions useless would reasonable make illegal support initialization process parent process access fields another process invoke methods another process latter process started grammar allow labels statements 
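The translation notes above describe rewriting quantifications into aggregations, which are in turn eliminated into comprehension-plus-loop form that updates an accumulator variable. A minimal Python sketch of why that enables cheap maintenance; the predicate p and class names are ours, not the compiler's generated code:

def exists_holds(s, p):
    # direct evaluation: re-examines all of s at every yield point
    return any(p(x) for x in s)

class ExistsIncremental:
    # maintains count {x in s | p(x)}; the quantification becomes cnt > 0
    def __init__(self, p):
        self.p = p
        self.cnt = 0
    def on_add(self, x):        # called when x is added to s
        if self.p(x):
            self.cnt += 1
    def on_remove(self, x):     # called when x is removed from s
        if self.p(x):
            self.cnt -= 1
    def holds(self):            # O(1) replacement for the quantification
        return self.cnt > 0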
await label statement await treated syntactic sugar label await true skip followed statement require messages tuples inessential restriction slightly simplifies specification pattern matching matching messages patterns classname must include set sequence sets sequences treated objects mutable predefined classes defined explicitly methods set include add del contains min max size methods sequence include add adds element end sequence contains length give semantics explicitly methods others handled similarly process sent sequence contains pairs form message sent process destination process received sequence contains pairs form message received process sender semantic domains tuples treated immutable values mutable objects semantic domains defined figure notation expressions free simplicity treat quantifications expressions existential quantifications binding contains finite sequences values domain set contains finite sets values domain isinstance set istuple len select select else sequence istuple len select select else skip figure translation loop eliminate tuple pattern contains partial functions heaptype semantics even though information function distributed way heap implementation dom domain partial function msgqueue associated process last true false component state contains messages paired sender arrived process yet handled matching receive definitions extended abstract syntax processaddress nonprocessaddress section defines abstract syntax programs val written user figure extends abstract syntax bool int address tuple include additional forms programs may evolve set val evaluation new productions shown productions given carry unchanged val field val setofval seqofval expression address address classname address object statement variable intuple tuple statement processaddress localheap processaddress processaddress figure extensions abstract syntax tuple msgqueue tuple processaddress statement intuple iterates state processaddress statement elements tuple obvious way heap channelstates evaluation contexts processaddress msgqueue evaluation contexts also called reduction contexts used identify next part expression statement evaluated evaluation context expression statefigure semantic domains ellipses used semantic ment hole denoted place next subdomains primitive values whose details standard expression evaluated evaluation conunimportant texts defined figure bool int processaddress nonprocessaddress address tuple val setofval seqofval object heaptype localheap heap channelstates notes transition relations require processaddress nonprocessaddress transition relation expressions form expressions heaptype localheap transition relation statements form state state transition relations implicitly parameterized program needed look method definitions disjoint processaddress heap local heap process address heaptype type object address convenience use single global function val literal address val val expression expression val expression unaryop binaryop expression binaryop val isinstance classname expression pattern expression expression instancevariable statement statement else statement instancevariable statement instancevariable intuple tuple send expression send val await expression statement anotherawaitclause timeout figure evaluation contexts configuration information transition relation expressions defined figure transition relation statements defined figures notation auxiliary functions extends holds iff class descendant class inheritance hierarchy classname new returns new instance 
new set sequence otherwise methodname classname tion methoddef def holds iff class defines method def definition define def definition nearest ancestor inheritance hierarchy defines localheap heaptype val relation iscopy holds iff value process local heap addresses evaluated respect copy process whose local heap copied whose local heap copied except instead referencing objects references newly created copies objects versions updated reflect creation objects exception process addresses used global identifiers process addresses copied unchanged new copies process objects created give auxiliary definitions formal definition iscopy val let addrs denote set addresses appear objects values reachable respect local heap formally transition rules matches address matches value element val matches label expression statement denotes addrs address dom field val dom addrs dom setofval seqofval addrs val addrs occurrences replaced function matches pattern iff equals example transition rules statements function processaddress statement matches maps process address statement function denotes function except maps denotes empty partial function partial val address address relation subst holds iff obtained replacing occurrence address dom informally maps addresses new objects addresses corresponding old objects formally function whose domain empty set partial function denotes function except mapping sequences denoted angle brackets int concatenation sequences subst bool int address dom dom subst first first element sequence rest sequence obtained removing first element length length sequence similarly object address address relation subst holds iff obtained replacing occurrence address dom sets let set bijections returns otherwise receiveatlabel returns executions execution sequence transitions initial state set initial states defined figure intuitively address initial process address received sequence initial process address sent sequence initial process informally execution statement initially associated process may eventually terminate statement associated process becomes skip indicating nothing left process get stuck statement associated process skip process enabled transitions due unsatisfied await statement error statement contains expression tries select component value tuple statement contains expression tries read value field run forever due infinite loop infinite recursion finally iscopy defined follows intuitively contains addresses newly allocated objects iscopy nonprocessaddress addrs processaddress dom dom dom dom dom dom dom subst val processaddress label localheap receive definition message received label process local heap using receive definition matchrcvdef returns appropriately instantiated body acknowledgments thank michael gorbovitski supporting use invts automatic incrementalization distalgo programs grateful following people helpful comments discussions ken birman andrew black jon brandvein wei chen ernie cohen mike ferdman john field georges gonthier leslie lamport nancy lynch lambert meertens stephan merz porter michel raynal john reppy emin sirer doug smith gene stark robbert van renesse thank anonymous reviewers detailed helpful comments first define auxiliary relations functions relation matchesdeflbl holds iff receive definition either lacks clause clause includes bound returns set variables appear pattern prefixed vars returns set variables appear findsubstpat returns substitution domain vars bound otherwise returns findsubst returns findsubstpat first receive pattern 
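The matching helpers being defined here (matches, findsubstpat, findsubst) can be pictured with a small Python sketch under our own simplified pattern encoding; the leaf kinds and names are illustrative, not the formal semantics' notation:

def find_subst(pattern, value, env):
    # returns a substitution (dict) for the pattern's free variables, or None
    kind, payload = pattern
    if kind == 'const':
        return {} if payload == value else None
    if kind == 'bound':   # non-prefixed variable: must equal its current value
        return {} if env.get(payload) == value else None
    if kind == 'free':    # prefixed variable: becomes bound to the value
        return {payload: value}
    if kind == 'tuple':
        if not isinstance(value, tuple) or len(payload) != len(value):
            return None
        subst = {}
        for p, v in zip(payload, value):
            # repeated free variables are not cross-checked in this sketch
            s = find_subst(p, v, dict(env, **subst))
            if s is None:
                return None
            subst.update(s)
        return subst
    return None

For example, find_subst(('tuple', [('free', 'x'), ('const', 'ack')]), (7, 'ack'), {}) yields {'x': 7}.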
findsubstpat otherwise returns references acar blelloch harper adaptive functional programming acm transactions programming languages systems agha actors model concurrent computation distributed systems mit press matchesdeflbl findsubst matchrcvdef returns body statement appears findsubst otherwise returns allen cocke kennedy reduction operator strength muchnick jones editors program flow analysis pages val processaddress label alvaro condie conway hellerstein sears classname localheap message declare consensus logic language acm sigops received label class process local operating systems review heap receiveatlabel returns set andrews olsson programming statements executed receiving language concurrency practice benjamin cummings context specifically class contains receive definition lee goldstein pillai matchrcvdef letj campbell language large ensembles indepenting receive definitions dently executing nodes proceedings intermatchrcvdef letting national conference logic programming pages springer matchrcvdef receiveatlabel field access dom dom invoke method class self dom methoddef defun invoke method class representative examples true dom set false dom set unary operations true false false true istuple true tuple istuple false tuple len tuple components binary operations true identical value plus int int select int tuple least components component isinstance isinstance true isinstance false disjunction true true false existential quantification sequence set linearization figure transition relation expressions baker bond corbett furman khorlin larson lloyd yushprakh megastore providing scalable highly available storage interactive services proceedings conference innovative database research pages attiya welch distributed computing fundamentals simulations advanced topics wiley edition auerbach goldberg goldszmidt gopal kennedy rao russell language distributed programming proceedings usenix winter technical conference usenix association berkeley orders magnitude bloom programming language http lastest release april accessed january bickford component specification using event classes proceedings international symposium software engineering pages springer badia question answering database querying bridging gap generalized quantification journal applied logic badia van gucht gyssens query languages generalized quantifiers ramakrishnan editor applications logic databases kluwer academic black hutchinson jul levy development emerald programming language proceedings acm sigplan conference history programming languages pages badia debes cao implementation query language generalized quantifiers proceedings international conference conceptual modeling pages springer burrows chubby lock service distributed systems proceedings usenix field assignment skip object creation new skip new dom address processaddress extends process sequential composition skip conditional statement true else false else loop intuple sequence set linearization intuple intuple intuple skip loop else skip invoke method class self dom process set sequence methoddef def invoke method class representative examples allocates local heap sent received sequences new process moves started process new local heap skip sequence sequence sent received extends process inherits start process dom dom nonprocessaddress nonprocessaddress skip dom set skip dom sequence figure transition relation statements part send message one process create copies message sender sent sequence receiver send skip processaddress sent iscopy iscopy send set 
processes send send set sent fresh variable message reordering channel order unordered program configuration permutation message loss channel reliability unreliable program configuration subsequence arrival message process process remove message channel append message sender pair message queue rest first length handle message yield point remove message sender pair message queue append copy received sequence prepare run matching receive handlers associated label hence must await self hcopyi rest length received iscopy first copy receiveatlabel first linearization await without timeout clause await length true await timeout clause terminated true condition await timeout length true await timeout clause terminated timeout occurs await timeout length false false context rule expressions context rule statements figure transition relation statements part init state processaddress nonprocessaddress nonprocessaddress process sequence sequence processaddress processaddress received sent figure initial states fidge timestamps systems preserve partial ordering proceedings australian computer science conference pages fioravanti pettorossi proietti senni program transformation development verification synthesis programs intelligenza artificiale garg elements distributed computing wiley gautam rajopadhye simplifying reductions conference record acm symposium principles programming languages pages isbn georgiou lynch andjoshua tauber automated implementation complex distributed algorithms specified ioa language international journal software tools technology transfer ghemawat gobioff leung google file system acm sigops operating systems review posium operating systems design implementation pages gorbovitski liu stoller rothamel tekle alias analysis optimization dynamic languages proceedings symposium dynamic languages pages acm press cai facon henglein paige schonberg type analysis data structure selection editor constructing programs specifications pages goyal language theoretic approach algorithms phd thesis department computer science new york university chand liu stoller formal verification distributed consensus proceedings international symposium formal methods pages springer granicz zimmerman hickey rewriting unity proceedings international conference rewriting techniques applications pages springer chang dean ghemawat hsieh wallach burrows chandra fikes gruber bigtable distributed storage system structured data acm transactions computer systems gupta mumick subrahmanian maintaining views incrementally proceedings acm sigmod international conference management data pages kemper moerkotte peithner optimizing queries universal quantification databases proceedings international conference large data bases pages morgan kaufman hansel cleaveland smolka distributed prototyping validated specifications journal systems software cormen leiserson rivest stein introduction algorithms mit press edition hunt konar junqueira reed zookeeper coordination systems usenix annual technical conference page cruz rocha goldstein pfenning linear logic programming language concurrent programming graph structures theory practice logic programming kaynar lynch segala vaandrager theory timed automata morgan claypool edition dean ghemawat mapreduce simplified data processing large clusters communications acm experiment compiler design concurrent programming language master thesis university texas austin decandia hastorun jampani kakulapati lakshman pilchin sivasubramanian vosshall vogels dynamo amazon highly available keyvalue 
store acm sigops operating systems review kshemkalyani singhal distributed computing principles algorithms systems cambridge university press lakshman malik cassandra decentralized structured storage system acm sigops operating systems review distalgo distalgo language distributed algorithms http beta release september release november lamport time clocks ordering events distributed system communications acm erlang erlang programming language http last released december mattern virtual time global states distributed systems proceedings international workshop parallel distributed algorithms pages northholland lamport specifying systems language tools hardware software engineers addisonwesley lamport pluscal algorithm language proceedings international colloquium theoretical aspects computing pages springer maymounkov kademlia information system based xor metric systems pages larson erlang concurrent programming communications acm mpi message passing interface forum http last released june liskov distributed programming argus communications acm mar nakamura incremental computation complex object queries proceedings acm sigplan conference programming systems languages applications pages liu systematic program design clarity efficiency cambridge university press liu stoller dynamic programming via static incrementalization symbolic computation paige simulation set machine ram proceedings international conference computing information pages canadian scholars press liu stoller datalog rules efficient programs time space guarantees acm transactions programming languages systems paige koenig finite differencing computable expressions acm transactions programming languages systems liu stoller gorbovitski rothamel liu incrementalization across object abstraction proceedings acm conference programming systems languages applications pages petukhin programs universally quantified embedded implications proceedings international conference logic programming nonmonotonic reasoning pages springer liu stoller rothamel optimizing aggregate array computations loops acm transactions programming languages systems prl project eventml http whatiseventml lastest release september accessed january liu wang gorbovitski rothamel cheng zhao zhang core access control efficient implementations transformations proceedings acm sigplan workshop partial evaluation program manipulation pages pugh teitelbaum incremental computation via function caching conference record annual acm symposium principles programming languages pages ramalingam categorized bibliography incremental computation conference record annual acm symposium principles programming languages pages liu gorbovitski stoller language framework transformations proceedings international conference generative programming component engineering pages acm press raynal distributed algorithms protocols wiley liu stoller lin executable specifications distributed algorithms proceedings international symposium stabilization safety security distributed systems pages springer raynal communication agreement abstractions asynchronous distributed systems morgan claypool raynal distributed algorithms systems springer liu stoller lin gorbovitski clarity efficiency distributed algorithms proceedings acm sigplan conference programming systems languages applications pages rothamel liu generating incremental implementations queries proceedings international conference generative programming component engineering pages acm press liu brandvein stoller lin demanddriven incremental object 
queries proceedings international symposium principles practice declarative programming pages acm press rowstron druschel pastry scalable decentralized object location routing systems proceedings international conference distributed systems platforms middleware pages springer lopes navarro rybalchenko singh applying prolog develop distributed systems theory practice logic programming july saha ramakrishnan incremental evaluation tabled logic programs proceedings international conference logic programming pages springer lynch distributed algorithms morgan kaufman scott lynx distributed programming language motivation design experience computer languages serbanuta rosu meseguer rewriting logic approach operational semantics information computation shvachko kuang radia chansler hadoop distributed file system proceedings ieee symposium mass storage systems technologies pages ieee press stoica morris karger kaashoek dabek balakrishnan chord scalable lookup protocol internet applications transactions networking swift warren xsb system version http latest release july tel introduction distributed algorithms cambridge university press edition van renesse altinbuken paxos made moderately complex acm computing surveys van renesse schneider chain replication supporting high throughput availability proceedings usenix symposium operating systems design implementation pages usenix association willard efficient processing relational calculus expressions using range query theory proceedings acm sigmod international conference management data pages willard algorithm handling many relational calculus queries efficiently journal computer system sciences wright felleisen syntactic approach type soundness information computation zhao huang stribling rhea joseph kubiatowicz tapestry resilient overlay service deployment ieee journal selected areas communications
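To close the distalgo material: an illustrative Python model (names ours) of the sent/received histories and channel behaviors specified by the appendix semantics above, where send records a (message, destination) pair and places a copy of the message in the receiver's queue, an unordered channel may deliver any permutation of pending messages, and an unreliable one any subsequence:

import copy
import random

class Proc:
    def __init__(self, name):
        self.name = name
        self.sent = []        # sequence of (message, destination) pairs
        self.received = []    # sequence of (message, sender) pairs
        self.queue = []       # arrived but not yet handled

    def send(self, msg, dest):
        self.sent.append((msg, dest.name))
        dest.queue.append((copy.deepcopy(msg), self.name))  # cf. the iscopy relation

    def handle_one(self):
        if self.queue:
            self.received.append(self.queue.pop(0))

def channel_step(queue, order='fifo', reliability='reliable'):
    # unordered channels may deliver any permutation; unreliable channels
    # may lose messages, keeping an arbitrary subsequence
    if order == 'unordered':
        random.shuffle(queue)
    if reliability == 'unreliable':
        queue[:] = [m for m in queue if random.random() < 0.9]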
6
ternary neural networks quantization may naveen abhisek dheevatsa dipankar bharat pradeep parallel computing lab intel labs bangalore india parallel computing lab intel labs santa clara abstract propose novel quantization fgq method ternarize pretrained full precision models also constraining activations using method demonstrate minimal loss classification accuracy topologies without additional training provide improved theoretical formulation forms basis higher quality solution using fgq method involves ternarizing original weight tensor groups weights using achieve accuracy within baseline full precision result respectively eliminating multiplications results enable full inference pipeline best reported accuracy using ternary weights imagenet dataset potential improvement performance also smaller networks like alexnet fgq achieves results study impact group size performance accuracy group size eliminate multiplications however introduces noticeable drop accuracy necessitates fine tuning parameters lower precision address activations ternary weights improving accuracy within full precision result additional training overhead final quantized model run full compute pipeline using weights potential improvement performance compared baseline models introduction today deep learning models achieve results wide variety tasks including computer vision natural language processing automatic speech recognition reinforcement learning mathematically involves solving optimization problem order millions parameters solving optimization problem also referred training neural network process current networks requires days weeks trained network evaluates function specific input data referred inference compute intensity inference much lower training owing fact inference done large number input data total computing resources spent inference likely dwarf spent training large somewhat unique compute requirements deep learning training inference operations motivate use customized low precision arithmetic specialized hardware run computations efficiently possible cases requires partial full training network low precision training allows network implicitly learn low precision representation along inherent noise however introduces significant resource overheads prohibitive many applications specifically involving edge devices reducing precision weights activations significant implication system design allows increasing compute density also reduce pressure memory current solutions focused compressing model going low binary weights allows storing model limited local memory however activations input need fetched external memory camera fetching data contributes majority system power consumption hence reducing size activations essential efficient utilization available computational resources solutions using lower precision representation activations however necessitate specialized hardware efficient implementation widespread adoption deep learning across various applications autonomous driving augmented reality etc increased demand inference tasks done edge devices efficiently address aforementioned system application requirements general trend move towards full lower precision inference pipeline evident advent sub hardware google tpu main stream cpu offerings also software support inference popular frameworks tensorflow theano compute libraries like nvidia paper focus enabling sub inference pipeline using ternary weights activations minimal yet achieving near accuracy rationale behind approach carefully convert weights 
distance weights small consequently weights remain neighborhood weights search space network parameters expect generalize similar manner despite summarize contributions based improved theoretical formulation propose novel quantization fgq method convert models ternary representation minimal loss test accuracy without ternary weights achieve classification accuracy activations activations imagenet dataset using model best knowledge highest reported accuracies category imagenet dataset demonstrate general applicability fgq results smaller models alexnet also show efficacy using fgq training low precision study using different group sizes group filters reduce number multiplications one every additions thus significantly reducing computation complexity potential improvement baseline full precision models rest paper organized follows discusses related work ternary weights low precision inference contrast fgq describes fgq formulation theoretical basis method followed includes experimental results related discussion finally conclude summarizing implications fgq results future research directions related work deep learning inference using weights activations topic many researchers experimented custom data representations perform deep learning tasks shown significant benefits general purpose floating point representation show dynamically scaled fixed point representation used speed convolution neural networks using general purpose cpu hardware carefully choosing right data layout compute batching using implementation optimized target hardware show improvement aggressively tuned floating point implementation https done comprehensive study effect low precision fixed point computation deep learning successfully trained smaller networks using fixed point specialized hardware suggests fixed point representations better suited low precision deep learning also recent efforts exploring floating point representation however schemes additional overhead reduced precision since exponent replicated value whereas fixed point representation using single shared exponent improves capacity precision typically deep neural networks reduced desired preserve numerical precision since loss range augmented dynamic scaling shared exponent commonly low precision networks designed trained scratch leveraging inherent ability network learn approximations introduced low precision computations prohibitive applications rely using previously trained models use cases typical many edge device deployments address cases fgq developed motivation able achieve accuracies without training hence enabling direct use models requirement results quantization scheme quite complex making widely applicable making also easily usable former case training scratch many recent reduced precision work look low precision weights retaining activations full precision using low precision also activations essential realize full benefits using ternary weights hardware needs operate throughput close precision weights better throughput compared using weights achieved would hard achieve activations full precision streaming activations main memory rate requires much higher bandwidth compute engine needs much wider deliver desired throughput increase area power budget desirable designing edge devices hence reducing size activations essential reducing compute requirements edge using activations dramatically reduces design requirements edge opens possibility achieving throughput improvement propose low precision networks binary weights retaining activations full 
precision use stochastic binarization scheme achieving sota accuracies smaller mnist svhn demonstrate accuracies large imagenet using alexnet topology also demonstrate variant binary weights activations computations simplified operations significant loss accuracy lower precision activations also used use weights activations smaller networks larger networks use activations binary weights showing reasonable accuracies however use specialized data representation requiring custom hardware efficient implementation solutions employ tailored approach different precision weights activations gradients implemented hardware introduces theoretical formulation ternary weight network using threshold based approach symmetric threshold one scaling factor layer provide approximation optimal ternary representation assuming weights follow gaussian distribution however one scaling factor per layer may not approximate network well model capacity limited case increase model capacity modify solution use two symmetric thresholds two scaling factors separately positive negative weights however despite improving accuracy approach typically makes inferencing inefficient requiring multiple passes positive negative values hence increasing bandwidth requirements proposed incremental quantization approach aims find optimal representation using iterative method constraining weights either powers using representation activations full precision aforementioned implementation require partial full training network low precision alternatively used log quantization method models achieved good accuracy tuning bit length layer without achieving accuracy imagenet dataset deeper networks without training low precision weights activations still challenge work attempt address problem improve existing approaches ternary conversion trained network goal convert trained weights $\mathbf{w}$ ternary values $\hat{\mathbf{w}}$ without retraining use threshold based approach similar: $\hat{w}_i = \operatorname{sign}(w_i)$ if $|w_i| > \Delta$, $0$ otherwise; error optimal ternary representation follows solving $(\alpha^*, \hat{\mathbf{w}}^*) = \operatorname{argmin}_{\alpha \ge 0,\, \hat{\mathbf{w}} \in \{-1,0,+1\}^n} \lVert \mathbf{w} - \alpha \hat{\mathbf{w}} \rVert_2^2$, $n$ size $\mathbf{w}$ hypothesize weights learn different types features may follow different distributions combining weights together represents mixture various distributions ternarizing using single threshold magnitude may not preserve distributions individual weights consequently many weights approximated poorly totally pruned leading loss valuable information learn may able compensate loss information train network low precision motivates use quantization technique involving multiple scaling factors order increase model capacity lead better preservation distributions learned filters moreover hypothesize positive negative weight distributions not always symmetric around mean refinement solution maybe possible using two separate thresholds positive negative weights respectively along scaling factor ternarize weights formulation computing separate weight compensates information loss better preserves underlying distributions however solution while showing significant improvement accuracy does not reduce number multiplications leading less efficient implementation therefore seek find achieving higher accuracy reducing total number multiplications propose quantization approach creates groups weights ternarizes group independently let consider weights represented vector $\mathbf{w}$ partition set indices disjoint subsets $I_1, \dots, I_k$ equal cardinality decompose orthogonal vectors $\mathbf{w} = \sum_{i=1}^{k} \mathbf{w}^{(i)}$, component $w^{(i)}_j = w_j$ if $j \in I_i$, $0$ otherwise clearly ternarize orthogonal component components since pruning never turns zero component nonzero following orthogonality holds follows given group filters $\min \bigl\lVert \mathbf{w} - \sum_{i=1}^{k} \alpha_i \hat{\mathbf{w}}^{(i)} \bigr\rVert_2^2 = \sum_{i=1}^{k} \min_{\alpha_i,\, \hat{\mathbf{w}}^{(i)}} \bigl\lVert \mathbf{w}^{(i)} - \alpha_i \hat{\mathbf{w}}^{(i)} \bigr\rVert_2^2$ therefore need solve $k$ independent subproblems
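A brute-force NumPy sketch (ours, not the paper's implementation) of one such independent subproblem: for each candidate threshold the least-squares scaling factor is the mean magnitude of the surviving elements, and we keep the pair minimizing the squared error:

import numpy as np

def ternarize_group(w):
    # exhaustive search over candidate thresholds for one group of weights
    mags = np.abs(w)
    best_err, best = np.inf, (0.0, np.zeros_like(w))
    for delta in np.unique(mags):
        mask = mags >= delta                 # elements kept at +/-1
        alpha = mags[mask].mean()            # optimal alpha for this support
        w_hat = np.sign(w) * mask
        err = np.sum((w - alpha * w_hat) ** 2)
        if err < best_err:
            best_err, best = err, (alpha, w_hat)
    return best                              # (alpha, vector in {-1, 0, 1})

For a group of N elements this search is O(N^2), which stays cheap precisely because the groups are small.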
formulation allows better ternary approximation original weights ensuring remain within neighborhood original solution complex search space parameters despite ternarization consequently expect solution ternary counterpart generalize similar manner model capacity point view three distinct values ternary weight vector without grouping $k$ groups however represent $2k+1$ distinct values thus increasing model capacity linearly number groups solve using approach following way given vector $\mathbf{v}$ of $n$ elements use separate thresholds $\Delta_+ > 0$ and $\Delta_- < 0$ positive negative weights along one scaling factor $\alpha$ ternarize: let $\hat{v}_i = 1$ if $v_i > \Delta_+$, $-1$ if $v_i < \Delta_-$, $0$ otherwise want solve $\min_{\alpha,\, \Delta_+,\, \Delta_-} \lVert \mathbf{v} - \alpha \hat{\mathbf{v}} \rVert_2^2$ following analytical solution $(\Delta_+^*, \Delta_-^*) = \operatorname{argmax}_{\Delta_+,\, \Delta_-} \bigl( \sum_{i \in I_\Delta} |v_i| \bigr)^2 / |I_\Delta|$ with $I_\Delta = \{ i : v_i > \Delta_+ \text{ or } v_i < \Delta_- \}$ and $\alpha^* = \frac{1}{|I_\Delta|} \sum_{i \in I_\Delta} |v_i|$ note symmetric case reproduces formulation $\Delta^* = \operatorname{argmax}_{\Delta > 0} \bigl( \sum_{|v_i| > \Delta} |v_i| \bigr)^2 / |\{ i : |v_i| > \Delta \}|$ advantage formulation smaller independent solved efficiently using methods achieve better approximation however also explore analytical solutions establish theoretical veracity approach assuming magnitude learned weights follow exponential distribution parameter $\lambda$ analytically derive optimal following lemma lemma using notations, $|w_i| \sim \operatorname{Exp}(\lambda)$: $\Delta^* = 1/\lambda$ and $\alpha^* = 2/\lambda$ relative error improvement over gaussian assumption analysis see need higher threshold value prune larger number smaller elements intuitive shape model distributions typically however may not appropriate use single distribution model weights layers neural network apply kolmogorov smirnov test measure identify appropriate reference distribution choose gaussian exponential find accordingly approximate distribution exponential one pruning smaller elements gives smaller error (figure: improvement theoretical ternary error, one $\alpha$ per layer, exponential versus gaussian assumption) choosing appropriate distribution use maximum likelihood functions estimate parameters distributions using test imagenet dataset: gaussian case estimated rms $\hat{\sigma} = \sqrt{\tfrac{1}{n} \sum_i w_i^2}$ exponential case estimated parameter $\hat{\lambda} = n / \sum_i |w_i|$ based refined analysis observe significant improvement theoretical ternary error gaussian assumption (figure) interesting observe earlier convolution layers trained imagenet magnitude weights follow exponential distribution later layer weights gaussian weight grouping method agnostic weights grouped leverages consequence grouping allows solving independent efficiently specifics grouping mechanism memory layout used accessing groups weights independent problem explore primary objective grouping minimize dynamic range within group split weights way smaller groups uniform distribution helps reducing complexity finding optimal solution independent using either analytical techniques however realize full performance potential ternarization essential ensure grouping mechanism not introduce significant overhead similarity based clustering algorithms despite better finding optimal grouping weights may even lead better accuracy not friendly efficient implementations software hardware random grouping elements memory locations leads irregular memory accesses longer latencies gather arbitrarily grouped weights use common partial accumulation output based empirical observations conclude using static groups weights partitioned along input channels achieves best accuracy elements multiple filters along significantly less variance since correspond similar input features hence grouping elements results reduced dynamic range within group (figure: static grouping, elements contiguous filters along input channel dimension, scaling factors per group) grouping also easily lends efficient implementation using existing hardware software using layout weight tensor groups elements accessed contiguous memory locations since elements along accumulate output feature layout also amenable efficient vectorization along
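A sketch of the static grouping just described, under one plausible reading of the layout: corresponding elements of N contiguous filters form a group. For self-containment each group here is ternarized with the simple gaussian-assumption heuristic (threshold at 0.7 times the mean magnitude) rather than the brute-force search above, which can be dropped in instead:

import numpy as np

def fgq_ternarize(W, N=4):
    # W: weight tensor with filters on the leading axis (assumes N divides K)
    K = W.shape[0]
    flat = W.reshape(K, -1)
    W_hat = np.zeros_like(flat)
    alphas = np.zeros((K // N, flat.shape[1]))
    for g in range(K // N):
        for j in range(flat.shape[1]):
            v = flat[g*N:(g+1)*N, j]              # one group per column
            delta = 0.7 * np.abs(v).mean()        # heuristic threshold
            mask = np.abs(v) > delta
            if mask.any():
                alphas[g, j] = np.abs(v[mask]).mean()
                W_hat[g*N:(g+1)*N, j] = np.sign(v) * mask
    return alphas, W_hat.reshape(W.shape)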
(figure: example grouping scheme applied filters — group ternary filters, scaling factors corresponding element filter) experimental results experimental results focused using dataset demonstrate efficacy method large sophisticated models using weights activations extended study applying fgq activations help reduce precision activations show results comparable activations tested networks towards establishing broader applicability fgq demonstrate accuracy also alexnet (figure: schematic describing low precision experimental setup — caffe emulating fine grained quantization fgq ternary weights; data flow: weight tensor convert ternary tensor, activation tensor convert via emulation library, convert weight tensor emulating range precision, convert activation tensor emulating range precision, convolution, output) setup consists modified version intel caffe emulates dynamic fixed point dfp computations described use accumulator low precision computations minimize chances activations overflow split weights groups elements using mechanism described use technique compute floating point values threshold group scaling factors quantized fixed point weights stored memory format described activations quantized performing convolution operation outputs converted appropriately rounded passed next layer experiments indicate essential use higher precision first layer minimize accumulation quantization loss also observe using parameters batch normalization layers leads loss accuracy due shift variance introduced quantization prevent loss recomputing batch normalization parameters inference phase compensate shift variance explored using different group sizes experiments show fgq group size achieves highest accuracy potential performance benefit applied model achieves accuracy within results activations reduced accuracy drops marginally performs equally well achieving accuracy result best knowledge (note: fixed point dynamic fixed point used interchangeably) (table: classification accuracy imagenet dataset achieved using fgq without training, results alexnet using compared best published results — networks, baseline, low precision training; alexnet, inq, dlac, dorefa) low precision highest reported accuracies using imagenet dataset using networks understand general applicability method wider range networks apply fgq smaller alexnet model applied alexnet model achieves accuracy without away baseline result see reduction accuracy previously published results cannot directly compared fgq perform quantization pretrained models work hence compare closest terms networks used target precision alexnet result using comparable previously published result away baseline using also employing training full precision gradients table comparison previous reported results using weights using ternary weights report slightly better absolute numbers numbers relatively better results use activations train network low precision achieve numbers without low precision training reduced precision activation results fgq still competitive similar aforementioned additional low precision training fgq able significantly improve accuracy get closer full precision results outlined next section along associated performance implications discussion order realize full performance potential ternary networks inference platform needs operate throughput close precision weights would increase amount memory bandwidth required stream activations compute engine much wider deliver desired compute throughput building solution around activations would prohibitive terms areas power requirements whereas amenable build solution
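As an aside before the performance discussion continues: an illustrative dynamic fixed point (dfp) quantizer for activation tensors — an 8-bit integer tensor plus one shared power-of-two exponent — in the spirit of the emulation setup described above; the rounding and clamping choices are ours:

import numpy as np

def to_dfp8(x):
    # quantize tensor x to int8 with one shared power-of-two exponent
    max_abs = float(np.max(np.abs(x)))
    if max_abs == 0.0:
        return np.zeros(x.shape, dtype=np.int8), 0
    exp = int(np.ceil(np.log2(max_abs / 127.0)))   # shared exponent
    q = np.clip(np.round(x / 2.0 ** exp), -127, 127).astype(np.int8)
    return q, exp

def from_dfp8(q, exp):
    return q.astype(np.float32) * 2.0 ** exp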
activations shows performance accuracy fgq based inference design model projects lower bound performance potential based percentage fma operations converted ternary accumulations group size ideal case equal total number weights layer best case performance potential compared baseline performance group size fma operations performed ternary using slightly larger replace fma operations ternary losing additional accuracy group size fma operations replaced ternary accumulations resulting potential improvement performance performance comes cost significant drop accuracy using larger groups weights results poor ternary approximation model consequently ternary solution moves away local optima display different generalization behavior noted works use slight variations hence slightly different baseline accuracies baseline full precision accuracy accuracy baseline speedup relative baseline training error accuracy epochs figure performance accuracy fgq based inference design weights imagenet dataset trained group size imagenet dataset recover accuracy lost ternarization initialized network model parameters network reduced learning rate order avoid exploding gradients retaining hyper parameters training performed gradient updates full precision training recover lost accuracy achieved bringing within baseline accuracy shows reduction training error improvements validation accuracy conclusion propose ternarization method exploits local correlations dynamic range parameters minimize impact quantization overall accuracy demonstrate near sota accuracy imagenet using models quantized networks without using ternary weights activations results within full precision accuracy using activations see drop accuracy best knowledge highest reported accuracies using ternary weights activations weight grouping based approach allows obtain solutions tailored specific hardware well used general purpose hardware based accuracy performance requirements smaller group sizes achieve best accuracy use computations ternary operations simple additions better suited implementation specialized hardware larger group sizes suitable current general purpose hardware larger portion computations low precision operations although comes cost reduced accuracy gap may bridged additional low precision training shown final quantized model efficiently run full compute pipeline thus offering potential benefit continue actively work closing current accuracy gap exploring low precision training extensions fgq method also looking theoretical exploration better understand formal relationship weight grouping final accuracy attempt establish realistic bounds given requirements references yoshua bengio ian goodfellow aaron courville deep learning book preparation mit press matthieu courbariaux itay hubara daniel soudry ran yoshua bengio binarized neural networks training deep neural networks weights activations constrained arxiv preprint jia deng wei dong richard socher kai imagenet hierarchical image database computer vision pattern recognition cvpr ieee conference pages ieee tim dettmers approximations parallelism deep learning arxiv preprint suyog gupta ankur agrawal kailash gopalakrishnan pritish narayanan deep learning limited numerical precision icml pages song han jeff pool john tran william dally learning weights connections efficient neural network advances neural information processing systems pages kaiming xiangyu zhang shaoqing ren jian sun deep residual learning image recognition proceedings ieee conference computer vision pattern 
recognition pages itay hubara matthieu courbariaux daniel soudry ran yoshua bengio binarized neural networks advances neural information processing systems pages itay hubara matthieu courbariaux daniel soudry ran yoshua bengio quantized neural networks training neural networks low precision weights activations arxiv preprint yangqing jia evan shelhamer jeff donahue sergey karayev jonathan long ross girshick sergio guadarrama trevor darrell caffe convolutional architecture fast feature embedding arxiv preprint jouppi google supercharges machine learning tasks tpu custom chip google blog may alex krizhevsky ilya sutskever geoffrey hinton imagenet classification deep convolutional neural networks advances neural information processing systems pages fengfu zhang bin liu ternary weight networks arxiv preprint zhouhan lin matthieu courbariaux roland memisevic yoshua bengio neural networks multiplications arxiv preprint daisuke miyashita edward lee boris murmann convolutional neural networks using logarithmic data representation arxiv preprint mohammad rastegari vicente ordonez joseph redmon ali farhadi imagenet classification using binary convolutional neural networks eccv olga russakovsky jia deng hao jonathan krause sanjeev satheesh sean zhiheng huang andrej karpathy aditya khosla michael bernstein imagenet large scale visual recognition challenge international journal computer vision marcel simon erik rodner joachim denzler imagenet models batch normalization arxiv preprint yaman umuroglu nicholas fraser giulio gambardella michaela blott philip leong magnus jahre kees vissers finn framework fast scalable binarized neural network inference arxiv preprint vincent vanhoucke andrew senior mark mao improving speed neural networks cpus proc deep learning unsupervised feature learning nips workshop volume page citeseer ganesh venkatesh eriko nurvitadhi debbie marr accelerating deep convolutional networks using sparsity arxiv preprint darrell williamson dynamically scaled fixed point arithmetic communications computers signal processing ieee pacific rim conference pages ieee aojun zhou anbang yao yiwen guo lin yurong chen incremental network quantization towards lossless cnns weights poster international conference learning representations shuchang zhou yuxin zekun xinyu zhou wen yuheng zou training low bitwidth convolutional neural networks low bitwidth gradients arxiv preprint chenzhuo zhu song han huizi mao william dally trained ternary quantization arxiv preprint appendix proof lemma let $n$ denote number elements let $f(x) = \lambda e^{-\lambda x}$ pdf exponential distribution parameter $\lambda$ cdf $F(x) = 1 - e^{-\lambda x}$ furthermore $\int_{\Delta}^{\infty} x f(x)\, dx = (\Delta + 1/\lambda)\, e^{-\lambda \Delta}$ objective $(\Delta + 1/\lambda)^2 e^{-\lambda \Delta}$ attains maxima $\lambda \Delta = 1$ therefore $\Delta^* = 1/\lambda$ and $\alpha^* = \Delta^* + 1/\lambda = 2/\lambda$
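A quick Monte Carlo check (ours) of the reconstructed lemma: for exponentially distributed magnitudes the score-maximizing threshold sits near 1/lambda and the resulting scale near 2/lambda:

import numpy as np

rng = np.random.default_rng(0)
lam = 2.0
w = rng.exponential(1.0 / lam, 100000)            # magnitudes |w_i|
deltas = np.linspace(0.01, 3.0 / lam, 200)
scores = [(w[w > d].sum() ** 2) / np.count_nonzero(w > d) for d in deltas]
d_star = deltas[int(np.argmax(scores))]
print(d_star, 1.0 / lam)                  # threshold: ~0.5 vs 0.5
print(w[w > d_star].mean(), 2.0 / lam)    # scale:     ~1.0 vs 1.0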
9
may norms commutator subgroup infinite braid group mitsuaki kimura abstract paper give proof result brandenbursky says commutator subgroup infinite braid group admits stably unbounded norms moreover observe norms constructed equivalent biinvariant word norm studied brandenbursky introduction burago ivanov polterovich introduced notion conjugationinvariant norms asked several problems one follows problem exists perfect group satisfies following conditions commutator length stably bounded admits stably unbounded norm definitions norms see known groups exist brandenbursky proved following theorem theorem commutator subgroup infinite braid group admits stably unbounded norm kawasaki also showed commutator subgroup sympc group sympc group symplectomorphisms compact support isotopic identity standard symplectic space paper give proof using idea kawasaki kawasaki introduced relative quasimorphisms length proved existence implies stably unboundedness see proposition give proof construct stably unbounded norms observing signature braids quasimorphism study property norms prove following theorem theorem integer real numbers norm equivalent biinvariant word norm whose stably unboundedness observed brandenbursky acknowledgement author would like thank professor takashi tsuboi guidance helpful advice also thanks morimichi kawasaki useful advice mitsuaki kimura proofs main results construction stably unbounded norms first explain idea kawasaki definitions length following definition let group norm define subgroup subgroup generated elements define length min note norm definition let group norm function called quasimorphism relative exists constant every min concept appeared earlier paper entov polterovich theorem called controlled kawasaki proved following proposition proposition proposition let exist element stably unbounded next give useful sufficient condition prove function quasimorphism norm group normally generated subset defined min klgl denotes conjugation lemma lemma let group normally generated subset function exists constant inequality holds finally construct stably unbounded norms proving signature braids qbn denote group let standard artin generators braid denote closure braid defined signature link consider standard inclusion adding trivial string obtain following sequence define infinite braid group since signature braids inclusion compatible theorem signature braids norms proof lemma sufficient prove assumption implies existence braid let natural number since signature braids taking saddle moves times note obtain link link unknot components since trivial strings one figure known signature changes one saddle move see example since signature changed taking connected sum unknot signature changes times saddle moves hence figure saddle move figure prove theorem lemma let group normally generated conjugationinvariant norm assume bounded finite mitsuaki kimura proof following equalities hold using represent element product commutators form therefore since normally generated following corollary corollary proof theorem apply proposition signature since theorem sufficient see stabilization already known see example therefore stably unbounded norm corollary extremal property study properties norms two norms group called equivalent ratio bounded away first show norms equivalent follows fact norms extremal property lemma assume commute written products conjugates proof assumption thus call norm extremal property satisfies following condition norm satisfies exists positive number using lemma observe 
norms property proposition extremal property proof first prove extremal property let satisfy exists sufficiently large commute commutator argyle braid appeared lemma written products conjugates thus since conjugate obtain since proposition follows since follows proposition norms equivalent next also consider property biinvariant word norm proposition biinvariant word norm extremal property norms proof let written follows since thus written product commutators form since commute sufficiently large follows corollary proposition follows norm also equivalent norms constructed theorem references brandenbursky concordance group stable commutator length braid groups algebr geom topol appear burago ivanov polterovich norms groups geometric origin adv stud pure math entov polterovich symplectic intersections comment math kawasaki relative quasimorphisms stably unbounded norms group symplectomorphisms euclidean spaces symplectic geom appear murasugi certain numerical invariant link types trans amer math soc graduate school mathematical sciences university tokyo komaba tokyo japan address mkimura
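The extraction stripped every formula from the argument above, so the following LaTeX block is a hedged reconstruction of the definitions the proofs rely on (conjugation-invariant norm, stabilization, quasimorphism relative to a norm, equivalence of norms). The quantifiers and the constant C follow the standard Burago–Ivanov–Polterovich and Kawasaki formulations and are assumptions, not a verbatim restoration of the paper's statements.

```latex
% hedged reconstruction -- notation assumed, not verbatim from the paper
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
A \emph{conjugation-invariant norm} on a group $G$ is a function
$\nu\colon G\to[0,\infty)$ such that for all $g,h\in G$:
$\nu(g)=0$ iff $g=1$;\quad $\nu(g^{-1})=\nu(g)$;\quad
$\nu(gh)\le\nu(g)+\nu(h)$;\quad $\nu(hgh^{-1})=\nu(g)$.
Its \emph{stabilization} is $s\nu(g)=\lim_{n\to\infty}\nu(g^{n})/n$,
and $\nu$ is \emph{stably unbounded} if $s\nu(g)>0$ for some $g\in G$.

A function $\phi\colon G\to\mathbb{R}$ is a \emph{quasimorphism relative to}
$\nu$ if there exists $C\ge 0$ with
\[
  \lvert\phi(gh)-\phi(g)-\phi(h)\rvert \le C\,\min\{\nu(g),\nu(h)\}
  \quad\text{for all } g,h\in G.
\]
Two norms $\nu_1,\nu_2$ on $G$ are \emph{equivalent} when the ratio
$\nu_1/\nu_2$ is bounded away from $0$ and $\infty$ on $G\setminus\{1\}$.
\end{document}
```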
4
initialization multilayer forecasting artificial neural networks bochkarev maslennikova kazan volga region federal university vbochkarev yuliamsl abstract paper new method developed initialising artificial neural networks predicting dynamics time series initial weighting coefficients determined neurons analogously case linear prediction filter moreover improve accuracy initialization method multilayer neural network variants decomposition transformation matrix corresponding linear prediction filter suggested efficiency proposed neural network prediction method forecasting solutions lorenz chaotic system shown paper keywords artificial neural networks forecasting neural network initialization linear prediction filter introduction reliable forecasts necessary nowadays scientific research daily operations present problem usually solved linear prediction method latter reduces solution forecasting problem search coefficients linear prediction filter pth order gives best prediction current value sequence previous values filter order letter denotes coefficients filter rule one minimizes sum squared prediction errors considered function filter coefficients since graph function paraboloid minimum point unique linear prediction method efficient stationary systems systems whose properties change course time distribution parameter fluctuations close gaussian distribution complicated cases one better use nonlinear methods artificial neural networks however method also drawbacks thus case neural networks owing nonlinearity neurons activation function error functional complicated calculated local minimum point necessary global result spite great potential neural networks error neural network prediction model may exceed linear one mentioned using neural networks choice rule initialization neuron weighting coefficients plays important role algorithms initialization neural networks imply choice initial values weighting coefficients choice performed deterministic technique random one widely used algorithm latter case rule network trained several times order increase probability finding global minimum error function leads multiple growth computational burden obtained result may appear unsatisfactory one proposes interesting algorithm constructing hybrid neural network time series prediction first one trains network consisting single linear neuron obtained coefficients used initializing hybrid network also contains one neuron one sequentially adds neurons network decrease prediction error stops paper consider another approach also based preliminary solution optimal linear prediction problem formula demonstrates linear prediction filter representable linear neural direct transmission network analogy allows use preliminarily calculated coefficients linear prediction model base including corresponding elements neural network idea construct neural network training transforms data way expression order make full use abilities neural network algorithms approximating complex dependencies network consist layers meet difficulty connected ambiguity decomposition initial linear transform several sequential ones correspond layers neural network follows consider two possible ways overcome difficulty simplest algorithm initialization neural network based coefficients linear prediction model consider simplest technique solving stated problem case functions implemented various network layers evidently separated error occurs due nonlinearity neurons activation functions prior network training small fact imposes certain
restrictions choice activation functions layers example hyperbolic tangent sigmoid domain weak nonlinearity origin coordinates using neurons sigmoidal activation functions properly scaling training set make transforms signal network neurons close linear let consider example application proposed initialization algorithm neural direct transmission network assume input values neurons correspond linear parts activation function represent resulting transformation matrix neural network follows transformation matrices layers respectively fig input layer network scales data make values series located within linear domain hyperbolic tangent fig transformation matrix neural network initialization intermediate layer performs weakly nonlinear transform help weight matrix contains coefficients linear prediction model output layer performs converse data transform proper choice coefficient one make error stipulated nonlinearity activation function arbitrarily small case neural network weights chosen way estimate current value input signal preceding sequential ones accordance expression note results prediction given neural network training nearly coincide output linear prediction filter training network help algorithm monotonic decrease error gradient algorithms based conjugate gradients method etc result obtained training step even better obtained help linear prediction filter paper use algorithm belongs class quasinewton methods guarantee high convergence rate prediction lorenz chaotic system proposed technique neural network initialization applied prediction trajectories lorenz system simplest example determinate chaotic system first observed lorenz numerical experiments studying trajectory system three connected quadratic ordinary differential equations define three modes equations convection liquid layer heated mentioned equations take form according results numerical modeling solutions system many values parameters asymptotically tend unstable cycles two evident clusterization centers thus form strange attractor fig illustrates solution system another important property solutions lorenz system essential dependence initial condition constitutes main feature chaotic dynamics therefore behavior system poorly predictable fig trajectories lorenz attractor left variation coordinate right use lorenz system test problem due fact rather adequate model many real systems one mentions system approximately describes oscillations parameters turbulent flows variations parameters geomagnetic field one also encounters system equations models economic processes testing proposed initialization algorithm constructed series correspond solutions system applied method automatic choice step value thinned obtained series order get sample acceptable size covering long time interval time step sec moreover removed initial part sample order make data used testing correspond movement near attractor make results independent choice initial point represent results prediction behavior given system help neural network initial values weights chosen accordance coefficients linear prediction filter first calculated coefficients linear prediction model base constructed neural network way described predicted current value sequential previous ones first second layer neural network contained neurons trained neural network help error backpropagation method minimizing error functional algorithm within epochs training prediction performed time interval varying second fig part predicted series increase coordinate seconds interior rectangle 
shows zoomed part series initial series solid line linear prediction dashed line neural network prediction dotted line fig shows part predicted series comparison also depict linear prediction series example typical whole range prediction interval namely proposed initialization algorithm provides essential improvement comparison linear prediction method neural network prediction method used initialization algorithm see table mean square prediction errors various time intervals comparison results obtained methods results numerical experiments prove efficiency proposed neural network initialization method also ascertain considered problem neural network prediction much efficient linear prediction method improved network initialization method main drawback considered simplest algorithm consists fact layers network least initial training stage participate extent data transformation order fully use abilities multilayer networks desirable make computational load layers uniform mentioned described method initialization neural network unique according considered example threelayer neural network one include real orthogonal matrices resulting transformation matrix see expression fig therefore training neural network initial values weighting coefficients layers given matrices respectively still completely corresponds initial linear prediction filter however training process nonlinearity neurons transfer function expect improvement prediction accuracy order choose optimal matrices convenient represent exponential form known case order make matrices orthogonal matrices skewsymmetric git let number neurons layers equal linear space skewsymmetric dimension space equals choose basis example example number neurons layers small one determine unknowns expression gradient descent method appropriate choice case define objective function error neural network initialized mentioned way subject full course training however clear approach requires calculations case calculation gradient coefficients enter requires network training courses one solve problem either using stochastic search algorithms random initialization coefficients case instead exponential parameterization orthogonal matrices one perform parameterization requires less computations search global minimum case requires multiple training neural network another approach describe based following ideas let choose matrices make nonlinear transformation data neural network bring gain soon possible rate neural network training depends gradient calculated differentiating error function respect neurons weights vary coefficients maximize norm gradient note process quite analogous training neural network error backpropagation method essence impose linear constraints variation weighting coefficients accordance expressions view fact expect total time consumption approximately two times greater simplest initialization algorithm described implementation algorithm needs minimal additions available libraries neural network computations example network error function first epoch training used objective function cases one solve optimization problem example gradient descent method performed numerical experiments proposed algorithm model data structure neural network previous case simplicity programming used error neural network first epoch training objective function performed iterations gradient descent method network calculated optimal values parameters initial weights performed epochs training fixed parameters total time consumption increased less times comparison 
simplest initialization algorithm considered see table values mean square prediction errors obtained application methods linear prediction method neural network prediction method nguyenwidrow initialization algorithm simplest algorithm improved one based use coefficients linear prediction model conditions prediction intervals second evidently use improved initialization algorithm cases results essential decrease prediction error table prediction method linear prediction method neural network prediction method neural network prediction method based linear one improved neural network prediction method sec sec sec sec comparison also determined values coefficients minimizing error output fully trained network case error prediction sec equals value obtained using greatest gradient criterion evidently switching method requires much computations insignificantly improve prediction accuracy conclusion according predicted behavior lorenz chaotic system proposed initialization method neural network allows one essentially decrease prediction error comparison linear prediction method neural network one use universal initialization algorithms also true problem conditions neural network prediction method much efficient linear one improved initialization algorithm allows additionally improve prediction accuracy slight increase training time therefore developed method proved consistent predicting behavior trajectories lorenz chaotic system allows one use predicting behavior complex series references statistical analysis time series linear stochastic systems constant coefficients statistical approach york galushkin neural networks theory springer kruglov dli golunov moscow fizmalit publisher russian medvedev potemkin neural networks matlab moscow russian solar activity forecast spectral analysis neurofuzzy prediction journal atmospheric physics practical optimization springer richard crownover introduction fractals chaos jones bartlett frick turbulence approaches models edition izhevsk rdc regular chaotic dynamics russian puu nonlinear economic dynamics springer
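Since the formulas above were lost in extraction, here is a minimal runnable sketch of the simplest initialization scheme as described: fit a linear prediction filter by least squares, then seed a three-layer tanh network so that, before any training, it approximately reproduces that filter. The names lorenz_series, ar_coefficients and init_weights and the scale factor alpha are mine, not the authors'; the improved variant based on orthogonal decompositions of the transformation matrix is omitted.

```python
import numpy as np

def lorenz_series(n, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Return x(t) of the Lorenz system via simple Euler steps (test signal)."""
    x, y, z = 1.0, 1.0, 1.0
    out = np.empty(n)
    for i in range(n):
        dx, dy, dz = sigma * (y - x), x * (rho - z) - y, x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        out[i] = x
    return out

def ar_coefficients(s, p):
    """Least-squares linear prediction filter of order p, plus an intercept."""
    rows = len(s) - p
    X = np.column_stack([s[i:i + rows] for i in range(p)] + [np.ones(rows)])
    coef, *_ = np.linalg.lstsq(X, s[p:], rcond=None)
    return coef[:p], coef[p]

def init_weights(a, p, alpha=0.01):
    """Seed x -> tanh(W1 @ x) -> tanh(W2 @ h1) -> w3 @ h2 with the filter.

    W1 shrinks inputs into tanh's near-linear zone, one hidden unit of W2
    carries the filter taps, and w3 undoes the scaling, so the untrained
    network approximately equals the linear predictor."""
    W1 = alpha * np.eye(p)
    W2 = np.zeros((p, p))
    W2[0] = a
    w3 = np.zeros(p)
    w3[0] = 1.0 / alpha
    return W1, W2, w3

s = lorenz_series(5000)
p = 8
a, b = ar_coefficients(s, p)
W1, W2, w3 = init_weights(a, p)
window = s[1000:1000 + p]                            # p consecutive past values
linear = a @ window + b                              # linear filter prediction
net = w3 @ np.tanh(W2 @ np.tanh(W1 @ window)) + b    # intercept kept outside net
print(linear, net)                                   # nearly equal before training
```

With alpha = 0.01 the arguments of both tanh layers stay around 0.2 in magnitude for Lorenz-scale data, so the relative error introduced by the nonlinearity before training is on the order of one percent; gradient training then only improves on the linear baseline.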
9
feb manipulating measuring model interpretability forough daniel goldstein jake hofman dgg jmh university colorado boulder microsoft research microsoft research jennifer wortman vaughan hanna wallach jenn wallach microsoft research microsoft research abstract despite growing body research focused creating interpretable machine learning methods empirical studies verifying whether interpretable methods achieve intended effects end users present framework assessing effects model interpretability users via experiments participants shown functionally identical models vary factors thought influence interpretability using framework ran sequence randomized experiments varying two putative drivers interpretability number features model transparency clear measured factors impact trust model predictions ability simulate model ability detect model mistakes found participants shown clear model small number features better able simulate model predictions however found difference multiple measures trust found clear models improve ability correct mistakes findings suggest interpretability research could benefit emphasis empirically verifying interpretable models achieve intended effects introduction machine learning increasingly used make decisions affect people lives critical domains like criminal justice credit lending medicine machine learning models often evaluated based predictive performance data sets measured example terms accuracy precision recall however good performance data may sufficient convince decision makers model trustworthy reliable wild address problem new line research emerged focuses developing interpretable machine learning methods two common approaches first employ models impact feature model prediction easy understand examples include generative additive models lou caruana point systems jung ustun rudin second provide explanations potentially complex models one thread research direction looks explain individual predictions learning simple local approximations model around particular data points ribeiro lundberg lee estimating influence training examples koh liang another focuses visualizing model output wattenberg despite flurry activity innovation area still consensus define quantify measure interpretability machine learning model kim indeed different notions interpretability simulatability trustworthiness simplicity often conflated lipton problem exacerbated fact different types users machine learning systems users may different needs different scenarios example approach works best regulator wants understand particular person denied loan may different approach works best data scientist trying debug machine learning model take perspective difficulty defining interpretability stems fact interpretability something directly manipulated measured rather interpretability latent property influenced different manipulable factors number features complexity model transparency model even user interface impacts different measurable outcomes end user ability simulate trust debug model different factors may influence outcomes different ways argue understand interpretability necessary directly manipulate measure influence different factors real people abilities complete tasks endeavor goes beyond realm typical machine learning research factors influence interpretability properties system design outcomes would ultimately like measure properties human behavior building interpretable machine learning models purely computational problem words interpretable defined people algorithms therefore take 
interdisciplinary approach building decades psychology social science research human trust models dietvorst logg general approach used literature run randomized experiments order isolate measure influence different manipulable factors trust goal apply approach order understand relationships properties system design properties human behavior present sequence randomized experiments varied factors thought make models less interpretable glass lipton measured changes impacted people decision making focus two factors often assumed influence interpretability rarely studied formally number features model transparency whether model internals clear black box focus laypeople opposed domain experts ask factors help simulate model predictions gain trust model understand model make mistakes others used experiments validate evaluate particular machine learning innovations context interpretability ribeiro lim attempt isolate measure influence different factors systematic way taking experimental approach experiments participants asked predict prices apartments single neighborhood new york city help machine learning model apartment represented terms eight features number bedrooms number bathrooms square footage total rooms days market maintenance fee distance subway distance school participants saw set apartments feature values crucially model prediction apartment came linear regression model varied experimental conditions presentation model result observed differences participants behavior conditions could attributed entirely model presentation first experiment section hypothesized participants shown clear model small number features would better able simulate model predictions likely trust thus follow model predictions also hypothesized participants different conditions would exhibit varying abilities correct model inaccurate predictions unusual examples predicted found participants shown clear model small number features better able simulate model predictions however find likely trust model predictions instead found difference trust conditions also found participants shown clear model less able correct inaccurate predictions second experiment section scaled apartment prices maintenance fees match median housing prices order determine whether findings first experiment merely artifact new york city high prices even prices fees findings first experiment replicated third experiment section dug deeper finding difference trust conditions make sure finding simply due measures trust instead used weight advice measure frequently used literature yaniv gino moore subsequently used context algorithmic predictions logg hypothesized participants would give greater weight predictions clear model small number features predictions model large number features update predictions accordingly also hypothesized participants behavior might differ told predictions made human expert instead model large number features even weight advice measure found difference trust conditions also found difference participants behavior told predictions made human expert view experiments first step toward larger agenda aimed quantifying measuring impact different manipulable factors influence interpretability experiment predicting apartment prices first experiment designed measure influence number features model transparency three properties human behavior commonly associated interpretability laypeople abilities simulate model predictions gain trust model understand model make mistakes running experiment posited three simulation clear model 
small number features easiest participants simulate trust participants likely trust thus follow predictions clear model small number features predictions model large number features detection mistakes participants different conditions exhibit varying abilities correct model inaccurate predictions unusual examples unusual examples intentionally hypotheses conditions would make participants less able correct inaccurate predictions one hand participant understands model links documents experiment omitted preserve author anonymity clear condition clear condition clear condition clear condition figure four primary experimental conditions conditions top model used two features bottom used eight conditions left participants saw model internals right presented model black box better may better equipped correct examples model makes mistakes hand participant may place greater trust model understands well leading closely follow predictions prediction error finally intent analyze participants prediction error condition intentionally directional hypotheses experimental design explained previous section asked participants predict apartment prices help machine learning model showed participants set apartments model prediction apartment varied experimental conditions presentation model considered four primary figure part testing phase first experiment participants asked guess model prediction state confidence step participants asked make prediction state confidence step experimental conditions design participants saw model uses two features number bathrooms square two predictive features saw model uses eight features note eight feature values visible participants conditions participants saw model internals linear regression model visible coefficients presented model black box screenshots four primary experimental conditions shown figure additionally considered baseline condition model available ran experiment amazon mechanical turk using psiturk gureckis platform designing online experiments experiment recruited participants located mechanical turk approval ratings greater participants randomly assigned five conditions clear participants clear model participant received flat payment participants first shown detailed instructions including clear conditions simple english description corresponding linear regression model proceeding experiment two phases training phase participants shown ten apartments random order four primary experimental conditions participants shown model prediction apartment price asked make prediction shown apartment actual price baseline condition participants asked predict price apartment shown actual price testing phase participants shown another twelve apartments order first ten randomized remaining two always appeared last reasons described four primary experimental conditions participants asked guess model would predict apartment simulate model indicate confident guess scale figure shown model prediction asked indicate confident model correct finally asked make prediction apartment price indicate confident prediction figure baseline condition participants asked predict price apartment indicate confidence apartments shown participants selected data set actual upper west side apartments taken popular reliable new york city real estate website create models four primary experimental conditions first trained linear regression model data set using ordinary least squares python library pedregosa rounding coefficients nice numbers within safe keep models similar possible fixed 
coefficients number bathrooms square footage intercept model match model trained linear regression model remaining six features following rounding procedure obtain nice numbers resulting coefficients shown figure presenting model predictions participants rounded predictions nearest enable comparisons across experimental conditions ten apartments used training phase first ten apartments used testing phase selected apartments data set rounded predictions models agreed chosen cover wide range deviations models predictions apartments actual prices selecting apartments models agreed able ensure varied experimental conditions presentation model result observed differences participants behavior conditions could attributed entirely model presentation last two apartments used testing phase chosen test third participants different conditions exhibit varying abilities correct model inaccurate predictions unusual examples test hypothesis would ideally used apartment strange misleading features caused models make bad prediction unfortunately apartment data set chose two examples test different aspects hypothesis examples exploited models large coefficient number bathrooms first apartment apartment data set models made high different predictions comparisons feature conditions therefore impossible could examine differences accuracy clear conditions second apartment synthetically generated apartment models made high prediction allowing comparisons conditions ruling accuracy comparisons since ground truth apartments always shown last avoid previously studied phenomenon people trust model less seeing make mistake dietvorst results run experiment compared participants behavior across conditions required compare multiple responses multiple participants complicated possible correlations among given participant responses example people might consistently overestimate apartment prices regardless condition assigned others might consistently provide underestimates addressed fitting model measure interest capture differences across conditions controlling https particular coefficient found value divisible largest possible exponent ten safe range coefficient value plus minus experiment deviation experiment prediction error mean prediction error mean deviation model mean simulation error experiment simulation error figure results first experiment mean simulation error mean deviation participants predictions model prediction smaller value indicates higher trust mean prediction error error bars indicate one standard error standard approach analyzing repeated measures experimental designs bates derived plots statistical tests models plots show averages one standard error condition fitted models statistical tests report degrees freedom test statistics unless otherwise noted plots statistical tests correspond first ten apartments testing phase simulation defined participant simulation error absolute deviation model prediction participant guess prediction figure shows mean simulation error testing phase hypothesized participants clear condition lower simulation error average participants conditions contrast clear three primary conditions suggests average participants condition understanding model works participants clear condition appeared higher simulation error average participants condition could see model internals contrast clear though note comparison one could due chance trust measure trust calculated absolute deviation model prediction participant prediction apartment price smaller value indicates higher trust figure 
shows contrary second hypothesis found significant difference participants deviation model clear find statistically practically significant differences participants confidence models predictions detection mistakes used last two apartments testing phase apartment apartment test third hypothesis models made erroneously high predictions examples apartments found participants four primary experimental conditions overestimated apartments prices compared participants baseline condition suspect due anchoring effect around models predictions apartment found significant difference participants deviation model prediction four primary conditions see figure apartment found significant difference clear conditions contrast clear clear particular participants clear conditions deviated model prediction less average participants conditions resulting even worse final predictions apartment price see figure finding follow standard notation result degrees freedom reported value test statistic corresponding figure mean deviation model apartments first experiment top second experiment bottom error bars indicate one standard error note apartment comparisons two and conditions possible models make different predictions contradicts common intuition transparency enables users understand model make mistakes prediction error defined prediction error absolute deviation apartment actual price participant prediction apartment price figure shows find significant difference four primary experimental conditions however participants baseline condition significantly higher error participants four primary conditions contrast baseline four primary conditions experiment prices one potential explanation participants poor abilities correct inaccurate predictions might lack familiarity new york city unusually high apartment prices example participant finds upper west side prices unreasonably high even model correct may notice model placed much weight number bathrooms second experiment designed address issue replicating first experiment apartment prices maintenance fees scaled match median housing prices running experiment three hypotheses first two hypotheses identical first experiment made third hypothesis precise reflect results first experiment small pilot prices figure results second experiment mean simulation error mean deviation participants predictions model predictions smaller value indicates higher trust mean prediction error error bars indicate one standard error detection mistakes participants less likely correct inaccurate predictions unusual examples clear model compared model experimental design first scaled apartment prices maintenance fees first experiment factor ten account change also scaled regression coefficients except coefficient maintenance fee factor ten apart description neighborhood apartments selected experimental design unchanged ran experiment amazon mechanical turk excluded people participated first experiment recruited new participants satisfied selection criteria first experiment participants randomly assigned five conditions clear clear model participant received flat payment results simulation hypothesized shown
figure participants clear condition significantly lower simulation error average participants conditions contrast clear three primary conditions line finding first experiment trust contrary second hypothesis line finding first experiment found significant difference participants trust indicated deviation model clear detection mistakes line finding first experiment found significant difference participants deviation model prediction four primary conditions apartment see figure hypothesized line finding first experiment participants clear conditions deviated model prediction less average participants conditions apartment resulting even worse final predictions apartment price see figure findings suggest new york city unusually high apartment prices explain participants poor abilities correct inaccurate predictions prediction error participants clear condition statistically practically significantly lower prediction error contrast clear three primary conditions experiment alternative measure trust first two experiments found participants likely trust predictions clear model small number features predictions model large number features indicated deviation predictions model prediction however perhaps another measure trust would reveal differences conditions section therefore present third experiment designed allow compare participants trust across conditions using alternative measure trust weight advice measure frequently used literature yaniv gino moore logg weight advice quantifies degree people update beliefs predictions made seeing model predictions toward advice given model predictions context experiment defined model prediction participant initial prediction apartment price seeing participant final prediction apartment price seeing equal participant final prediction matches model prediction equal participant averages initial prediction model prediction understand benefits comparing weight advice across conditions consider scenario close different reasons might happen one hand could case far participant made significant update initial prediction based model hand could case already close participant update prediction two scenarios indistinguishable terms participant deviation model prediction contrast weight advice would high first case low second additionally used experiment chance see whether participants behavior would differ told predictions made human expert instead model previous studies examined question different perspectives differing results dietvorst closely related experiment logg found people presented predictions either algorithm human expert updated predictions toward predictions algorithm toward predictions human expert variety domains interested see whether finding would replicate four hypotheses trust deviation participants predictions deviate less predictions clear model small number features predictions model large number features trust weight advice weight advice higher participants see clear model small number features see model large number features humans machines participants trust human expert model differing extents result deviation model predictions weight advice also differ detection mistakes participants different conditions exhibit varying abilities correct model inaccurate predictions unusual examples first two hypotheses variations first experiment last hypothesis identical experimental design considered four primary experimental conditions first two experiments plus new condition expert participants saw information model labeled human expert instead include 
baseline condition natural baseline would simply ask participants predict apartment prices first step testing phase described ran experiment amazon mechanical turk excluded people participated first two experiments recruited new participants satisfied selection criteria first two experiments participants randomly assigned five conditions clear clear expert participant received flat payment excluded data one participant reported technical difficulties asked participants predict apartment prices set apartments used first two experiments however order calculate weight advice modified experiment design participants asked two predictions apartment testing phase initial prediction shown model prediction final prediction shown model prediction ensure participants initial predictions across conditions asked initial predictions twelve apartments introducing model human expert informing would able update predictions design added benefit potentially reducing amount anchoring model expert predictions participants first shown detailed instructions intentionally include information corresponding model human expert proceeding experiment two phases short training phase participants shown three apartments asked predict apartment price shown apartment actual price testing phase consisted two steps first step participants shown another twelve apartments order twelve apartments randomized participants asked predict price apartment second step participants introduced model human expert revisiting twelve apartments first two experiments order first ten apartments randomized remaining two apartments always appeared last apartment participants first reminded initial prediction shown model expert prediction asked make final prediction apartment expert experiment weight advice experiment prediction error mean prediction error mean weight advice woa mean deviation model experiment deviation expert expert figure results third experiment mean deviation participants predictions model prediction smaller value indicates higher trust mean weight advice mean prediction error error bars indicate one standard error results trust deviation contrary first hypothesis line findings first two experiments found significant difference participants deviation model clear see figure trust weight advice weight advice well defined participant initial prediction matches model prediction condition therefore calculated mean weight advice pairs participant initial prediction match model calculation viewed calculating mean conditioned initial disagreement participant model contrary second hypothesis line findings measures trust first two experiments find significant difference participants weight advice clear conditions see figure humans machines contrary third hypothesis find significant difference participants trust indicated either deviation predictions model expert prediction weight advice expert conditions detection mistakes contrast first two experiments find participants clear conditions less able correct inaccurate predictions initially considered alternative design participants asked predict apartment price shown model prediction asked update prediction moving next apartment pilots appeared participants changed initial predictions response model verify ran larger version experiment hypothesizing participants initial predictions would deviate less model predictions clear condition predicted indeed case amount participants initial predictions change based model see could viewed another measure trust found significant difference 
fraction times participants initial predictions matched model predictions discussion future work investigated two factors thought influence model number features model laypeople abilities simulate model predictions gain trust model understand model make mistakes although found clear model small number features easier participants simulate found difference trust also found participants less able correct inaccurate predictions shown clear model instead black box findings suggest one take granted simple transparent model always leads higher trust however caution readers jumping conclusion interpretable models valuable experiments focused one model presented one specific subpopulation subset scenarios interpretability might play important role instead see work first many steps towards larger goal rigorously quantifying measuring interpretability matters general experimental approach presenting people models make identical predictions varying presentation model order isolate measure impact different factors people abilities perform applied wide range different contexts may lead different conclusions example instead linear regression model one could examine decision trees rule lists classification setting experiments could repeated participants domain experts data scientists researchers lieu laypeople recruited amazon mechanical turk likewise many scenarios explored debugging poorly performing model assessing bias model predictions explaining individual prediction made hope work serve useful template examining importance interpretability contexts references douglas bates martin ben bolker steve walker fitting linear models using journal statistical software rich caruana yin lou johannes gehrke paul koch marc sturm noemie elhadad intelligible models healthcare predicting pneumonia risk hospital readmission proceedings acm sigkdd international conference knowledge discovery data mining kdd berkeley dietvorst joseph simmons cade massey algorithm aversion people erroneously avoid algorithms seeing err journal experimental psychology general finale kim towards rigorous science interpretable machine learning arxiv preprint francesca gino moore effects task difficulty use advice journal behavioral decision making alyssa glass deborah mcguinness michael wolverton toward establishing trust adaptive agents proceedings international conference intelligent user interfaces iui todd gureckis jay martin john mcdonnell alexander rich doug markant anna coenen david halpern jessica hamrick patricia chan psiturk framework conducting replicable behavioral experiments online behavior research methods jongbin jung connor concannon ravi shro sharad goel daniel goldstein simple rules complex decisions arxiv preprint pang wei koh percy liang understanding predictions via influence functions proceedings international conference machine learning icml brian lim anind dey daniel avrahami explanations improve intelligibility intelligent systems proceedings sigchi conference human factors computing systems chi zachary lipton mythos model interpretability arxiv preprint jennifer logg theory machine people rely algorithms harvard business school nom unit working paper yin lou rich caruana johannes gehrke intelligible models classification regression proceedings acm sigkdd international conference knowledge discovery data mining kdd yin lou rich caruana johannes gehrke giles hooker accurate intelligible models pairwise interactions proceedings acm sigkdd international conference knowledge discovery data mining kdd scott lundberg 
lee unified approach interpreting model predictions advances neural information processing systems nips dilek paul goodwin mary thomson sinan relative influence advice human experts statistical methods forecast adjustments journal behavioral decision making fabian pedregosa varoquaux alexandre gramfort vincent michel bertrand thirion olivier grisel mathieu blondel peter prettenhofer ron weiss vincent dubourg jake vanderplas alexandre passos david cournapeau matthieu brucher matthieu perrot duchesnay machine learning python journal machine learning research marco tulio ribeiro sameer singh carlos guestrin trust explaining predictions classifier proceedings acm sigkdd international conference knowledge discovery data mining kdd berk ustun cynthia rudin supersparse linear integer models optimized medical scoring systems machine learning journal martin wattenberg fernanda moritz hardt attacking discrimination smarter machine learning accessed https ilan yaniv receiving people advice influence benefit organizational behavior human decision processes
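To make the outcome measures used in these experiments concrete, here is a short sketch of the three metrics (simulation error, deviation from the model, and weight of advice). The function names are assumptions; the only rule taken from the text is that weight of advice is computed only over items where a participant's initial prediction differs from the model's.

```python
import numpy as np

def simulation_error(model_pred, guess):
    """Mean |model prediction - participant's guess of it|; lower means the
    participant simulates the model better."""
    return float(np.mean(np.abs(np.asarray(model_pred) - np.asarray(guess))))

def deviation_from_model(model_pred, final_pred):
    """Mean |model prediction - participant's own prediction|; the experiments
    read smaller deviation as higher trust in the model."""
    return float(np.mean(np.abs(np.asarray(model_pred) - np.asarray(final_pred))))

def weight_of_advice(initial, model_pred, final):
    """Mean WOA = (final - initial) / (model - initial), taken only over items
    where the initial prediction differs from the model's (elsewhere WOA is
    undefined); 1 means fully adopting the model's prediction, 0 ignoring it."""
    initial, model_pred, final = map(np.asarray, (initial, model_pred, final))
    ok = initial != model_pred
    return float(np.mean((final[ok] - initial[ok]) / (model_pred[ok] - initial[ok])))

# synthetic illustration (made-up numbers, not experiment data)
init_p  = np.array([1000.0, 1500.0, 2000.0])
model_p = np.array([1200.0, 1500.0, 1800.0])
final_p = np.array([1150.0, 1500.0, 1900.0])
print(weight_of_advice(init_p, model_p, final_p))  # items give 0.75, 0.5 -> 0.625
```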
2
recent advances recognition yanwei tao xiang jiang xiangyang xue leonid sigal shaogang gong yanwei xiangyang xue school data science fudan university shanghai china yanweifu xyxue jiang school computer science shanghai key lab intelligent information processing fudan university email ygj jiang corresponding author leonid sigal department computer science university british columbia canada email lsigal tao xiang shaogang gong school electronic engineering computer science queen mary university london email recent renaissance deep convolution neural networks encouraging breakthroughs achieved supervised recognition tasks class sufficient training data fully annotated training data however scale recognition large number classes training samples class remains unsolved problem one approach scaling recognition develop models capable recognizing unseen categories without training instances learning article provides comprehensive review existing recognition techniques covering various aspects ranging representations models datasets evaluation settings also overview related recognition tasks including open set recognition used natural extensions zeroshot recognition limited number class samples become available recognition implemented setting importantly highlight limitations existing approaches point future research directions existing new research area index learning recognition oneshot learning recognition introduction humans distinguish least basic object categories many subordinate ones breeds dogs also create new categories dynamically examples purely based description contrast existing computer vision techniques require hundreds thousands labelled samples object class order learn recognition model inspired humans ability recognize without seeing examples research area learning learn lifelong learning received increasing interests studies aim intelligently apply previously learned knowledge help future recognition tasks particular major topic research area building recognition models capable recognizing novel visual categories associated labelled training samples learning training examples learning recognizing visual categories setting testing instance could belong either seen categories problems solved setting transfer learning typically transfer learning emphasizes transfer knowledge across domains tasks distributions similar transfer learning refers problem applying knowledge learned one auxiliary develop effective model target recognize categories target domain one utilize information learned source domain unfortunately may difficult existing methods domain adaptation directly applied tasks since training instances available target domain thus key challenge learn generalizable feature representation recognition models usable target domain rest paper organized follows give overview recognition sec semantic representations common models recognition reviewed sec iii sec respectively next discuss recognition tasks beyond recognition sec including generalized recognition openset recognition recognition commonly used datasets discussed sec also discuss problems using datasets conduct recognition finally suggest future research directions sec vii conclude paper sec viii overview zero shot recognition recognition used variety research areas neural decoding fmri images face verification object recognition video understanding natural language processing tasks identifying classes without observed data called learning specifically settings recognition recognition model leverage training data
identify unseen thus main challenge recognition generalize recognition models identify novel object categories without accessing labelled instances categories key idea underpinning recognition explore exploit knowledge unseen class target domain semantically related seen classes source domain explore relationship seen unseen classes sec iii use intermediatelevel semantic representations semantic representation typically encoded high dimensional vector space common semantic representations include semantic attributes sec semantic word vectors sec encoding linguistic context semantic representation assumed shared dataset given semantic representation class name represented attribute vector semantic word vector representation termed class prototype iii semantic representations zero shot recognition semantic representations universal shared exploited knowledge transfer source target datasets sec order enable recognition novel unseen classes projection function mapping visual features semantic representations typically learned auxiliary data using embedding model sec unlabelled target class represented embedding space using class prototype projected target instance classified using recognition model measuring similarity projection class prototypes embedding space sec additionally open set setting test instances could belong either source target categories instances target sets also taken outliers source data therefore novelty detection needs employed first determine whether testing instance manifold source categories classified one target categories section review semantic representations used recognition representations categorized two categories namely semantic attributes beyond briefly review relevant papers table recognition considered type learning example reading description flightless birds living almost exclusively antarctica know recognize referring penguin even though people never seen penguin life cognitive science studies explain humans able learn new concepts extracting intermediate semantic representation descriptions flightless bird living antarctica transferring knowledge known sources bird classes swan canary cockatoo unknown target penguin reason humans able understand new concepts recognition training samples recognition ability termed learning learn interestingly humans recognize newly created categories examples merely based description able easily recognize video event named germany world cup winner celebrations definition exist july teach machines recognize numerous visual concepts dynamically created combining multitude existing concepts one would require exponential set training instances supervised learning approach supervised approach would struggle novel concepts germany world cup winner celebrations positive video samples would available july germany finally beat argentina win cup therefore recognition crucial recognizing dynamically created novel concepts composed new combinations existing concepts learning possible construct classifier germany world cup winner celebrations transferring knowledge related visual concepts ample training samples bayern munich champions europe spain world cup winner celebrations semantic attributes attribute wings refers intrinsic characteristic possessed instance class bird indicates properties spotted annotations head image object lampert attributes describe class instance contrast typical classification names instance farhadi learned richer set attributes including parts shape materials etc another commonly used methodology human action
recognition liu attribute modeling wang take attribute labels latent variables training dataset form structured latent svm model objective minimize prediction loss attribute description instance category useful semantically meaningful intermediate representation bridging gap low level features high level class concepts palatucci attribute learning approaches emerged promising paradigm bridging semantic gap addressing data sparsity transferring attribute knowledge image video understanding tasks key advantage attribute learning provide intuitive mechanism learning salakhutdinov transfer learning hwang particularly attribute learning enables learning zero instances class via attribute sharing learning specifically challenge recognition recognize unseen visual object categories without training exemplars unseen class requires knowledge transfer semantic information auxiliary seen classes example images unseen target classes later works parikh kovashka berg extended attributes compound attributes makes extremely useful information retrieval allowing complex queries asian women short hair big eyes high cheekbones identification finding actor whose name forgot image misplaced large collection broader sense attribute taken one special type subjective visual property indicates task estimating continuous values representing visual properties observed properties also examples attributes including interestingness memorability aesthetic age estimation image interestingness studied gygli showed three cues contribute interestingness aesthetics general preferences last refers fact people general find certain types scenes interesting others example outdoornatural jiang evaluated different features video interestingness prediction crowdsourced pairwise comparisons acm international conference multimedia retrieval icmr published special issue multimodal understanding subjective properties applications multimedia analysis subjective property understanding detection retrieval subjective visual properties used intermediate representation recognition well visual recognition tasks people recognized description pale skin complexion chubby face looks next subsections briefly review different types attributes attributes attributes defined human experts concept ontology different tasks may also necessitate contain distinctive attributes facial clothes attributes attributes biological traits age gender product attributes size color price shape attributes attributes transcend specific learning tasks typically independently across different categories thus allowing transference knowledge essentially attributes either serve intermediate representations knowledge transfer learning directly employed advanced applications clothes recommendation ferrari studied elementary properties colour geometric pattern human annotations proposed generative model learning simple color texture attributes attribute either viewed unary red colour round texture binary stripes unary attributes simple attributes whose characteristic properties captured individual image segments appearance red shape round contrast binary attributes complex attributes whose basic element pair segments stripes relative attributes attributes discussed use single value represent strength attribute possessed one indicate properties spotted annotations images objects contrast relative information form relative attributes used informative way express richer semantic meaning thus better represent visual information relative attributes directly used recognition 
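Before the detailed treatment of relative attributes that follows, here is a minimal sketch of the attribute-prototype pipeline outlined in the overview: learn a linear map from visual features to the semantic space on seen classes, then label a test instance by its nearest unseen-class prototype. The ridge-regression embedding and cosine scoring are illustrative assumptions, not a reconstruction of any specific method surveyed here.

```python
import numpy as np

def fit_embedding(X, S, lam=1.0):
    """Ridge regression W minimising ||X @ W - S||^2 + lam * ||W||^2.

    X: (n, d) visual features of seen-class images; S: (n, k) the semantic
    vector (e.g. attribute signature) of each image's class."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ S)

def zero_shot_predict(x, W, prototypes):
    """Project one test feature into the semantic space and return the index
    of the most cosine-similar unseen-class prototype."""
    p = x @ W
    P = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return int(np.argmax(P @ (p / np.linalg.norm(p))))

# toy illustration on synthetic data: 32-dim features, 5 binary attributes
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))
S = rng.integers(0, 2, size=(200, 5)).astype(float)
W = fit_embedding(X, S)
unseen = np.array([[1.0, 0.0, 1.0, 0.0, 1.0],    # prototype of unseen class 0
                   [0.0, 1.0, 1.0, 1.0, 0.0]])   # prototype of unseen class 1
print(zero_shot_predict(rng.normal(size=32), W, unseen))
```

The same two functions apply unchanged when the prototypes are word vectors of class names rather than attribute signatures, which is the word-embedding variant discussed below.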
relative attributes parikh first proposed order learn ranking function capable predicting relative semantic strength given attribute annotators give pairwise comparisons images ranking function learned estimate relative attribute values unseen images ranking scores relative attributes learned form richer representation corresponding strength visual properties used number tasks including visual recognition sparse data interactive image http search kovashka shrivastava active learning biswas visual categories kovashka proposed novel model feedback image search users interactively adjust properties exemplar images using relative attributes order best match ideal queries extended relative attributes subjective visual properties proposed model pruning annotation crowdsourced pairwise comparisons given pairwise image comparisons singh developed deep convolutional network simultaneously localize rank relative visual attributes localization branch adapted spatial transformer network attributes attributes usually defined extra knowledge either expert users concept ontology better augment attributes parikh proposed novel approach actively augment vocabulary attributes help resolve confusions new attributes coordinate discriminativeness candidate attributes however attributes far enough model complex visual data definition process still either inefficient costing substantial effort user experts insufficient descriptive properties may discriminative tackle problems necessary automatically discover discriminative intermediate representations visual data attributes attributes used recognition tasks despite previous efforts exhaustive space attributes unlikely available due expense ontology creation simple fact semantically obvious attributes humans necessarily correspond space detectable discriminative attributes one method collecting labels large scale problems use amazon mechanical turk amt however even excellent quality assurance results collected still exhibit strong label noise thus serious issue learning either amt existing social subtly even exhaustive ontology subset concepts ontology likely sufficient annotated training examples portion ontology effectively usable learning may much smaller inspired works automatically mining attributes data attributes explored previous works liu employed information theoretic approach infer attributes training examples building framework based latent svm formulation directly extended attribute concepts images comparable action attributes order better recognize human actions attributes used represent human actions videos enable construction descriptive models human action recognition augmented attributes attributes better differentiate existing classes farhadi also learned attributes attribute works limited first learn different types attributes attributes relative attributes attributes video attributes concept ontology semantic word embedding papers table ifferent types semantic representations zero shot recognition attributes separately rather jointly framework therefore attributes may patterns exist attributes second attributes mined data know corresponding semantic attribute names discovered attributes reasons usually attributes directly used learning limitations inspired works addressed tasks understanding multimedia data sparse incomplete labels particularly studied videos social group activities proposing novel scalable probabilistic topic model learning attribute space learned attributes enable learning learning learning habibian proposed new type video 
representation learning videostory embedding videos corresponding descriptions representation also interpreted attributes work best paper award acm multimedia video attributes existing studies attributes focus object classification static images another line work instead investigates attributes defined videos video attributes important corresponding video related tasks action recognition activity understanding video attributes correspond wide range visual concepts objects animal scenes meeting snow actions blowing candle events wedding ceremony compared static image attributes many video attributes computed image sequences complex often involve multiple objects video attributes closely related video concept detection multimedia community video concepts video ontology taken video attributes recognition depending ontology models used many approaches video concept detection chang snoek hauptmann gan qin therefore seen addressing video attribute learning solve video event detection works aim automatically expand hauptmann tang enrich yang set video tags given search query case tagging space constrained fixed concept ontology may large complex example vocabulary space tags video event detection also attracted large research attention recently video event higher level semantic entity typically composed multiple attributes example birthday party event consists multiple concepts blowing candle birthday cake semantic correlation video concepts also utilized help predict video event interest weakly supervised concepts pairwise relationships concepts gan general video understanding object scene semantics attributes note full survey recent works video event detection beyond scope paper semantic representations beyond attributes besides attributes many types semantic representations semantic word vector concept ontology representations directly learned textual descriptions categories also investigated wikipedia articles sentence descriptions knowledge graphs concept ontology concept ontology directly used semantic representation alternative attributes example wordnet one widely studied concept ontologies semantic ontology built large lexical dataset english nouns verbs adjectives adverbs grouped sets cognitive synonyms synsets indicate distinct concepts idea semantic distance defined wordnet ontology also used rohrbach transferring semantic information learning problems thoroughly evaluated many alternatives semantic links auxiliary target classes exploring linguistic bases wordnet wikipedia yahoo web yahoo image flickr image additionally wordnet used many vision problems fergus leveraged wordnet ontology hierarchy define semantic distance two categories sharing labels classification costa model exploits visual concepts images knowledge transfer recognition semantic word vectors recently word vector approaches based distributed language representations gained popularity recognition semantic attribute space dimension space specific semantic meaning according either human experts concept ontology one dimension could correspond fur another four legs sec contrast semantic word vector space trained linguistic knowledge bases wikipedia umbcwebbase using natural language processing models result although relative positions different visual concepts semantic meaning cat would closer dog sofa dimension space specific semantic meaning language model used project class textual name space projections used prototypes learning socher learned neural network model embed image word vector semantic space obtained using 
unsupervised linguistic model trained wikipedia text images either known unknown classes could mapped word vectors classified finding closest prototypical linguistic word semantic space distributed semantic word vectors widely used recognition model cbow model trained large scale text corpora construct semantic word space different unsupervised linguistic model distributed word vector representations facilitate modeling syntactic semantic regularities language enable reasoning vector arithmetics example moscow much closer russia capital russia capital semantic space one possible explanation intuition underlying syntactic semantic regularities distributional hypothesis states word meaning captured words cooccur frome scaled ideas recognize datasets proposed deep visualsemantic embedding model map images rich semantic embedding space recognition showed reasoning could used synthesize different label combination prototypes semantic space thus crucial learning recent work using semantic word embedding includes interestingly vector arithmetics semantic emotion word vectors matching psychological theories emotion ekman six basic emotions plutchik emotion example surprise sadness close disappointment joy trust close love since usually thousands words describe emotions emotion recognition also investigated models zero shot recognition help semantic representations recognition usually solved first learning embedding model sec recognition sec best knowledge general embedding formulation recognition first introduced larochelle embedded handwritten character typed representation helped recognize unseen classes embedding models aim establish connections seen classes unseen classes projecting lowlevel features close corresponding semantic vectors prototypes embedding learned known classes novel classes recognized based similarity prototype representations predicted representations instances embedding space recognition model matches projection image features unseen class prototypes embedding space addition discussing models recognition methods sec sec respectively also discuss potential problems encountered recognition models sec embedding models bayesian models embedding models learned using bayesian formulation enables easy integration prior knowledge type attribute compensate limited supervision novel classes image video understanding generative model first proposed ferrari zisserman learning simple color texture attributes lampert first study problem object recognition categories training examples available direct attribute prediction dap indirect attribute prediction iap first two models zeroshot recognition dap iap algorithms use single model first learns embedding using support vector machine svm recognition using bayesian formulation dap iap inspired later works employ generative models learn embedding including topic models random forests briefly describe dap iap models follows dap model assume relation known classes unseen classes descriptive attributes given matrix binary matrix encodes associations values attribute given class extra knowledge applied define association matrix instance leveraging human experts lampert consulting concept ontology semantic relatedness measured class attribute concepts rohrbach training stage attribute classifiers trained attribute annotations known classes test stage posterior probability inferred individual attribute image predict class label object class (a toy numpy sketch of this dap decision rule follows this passage) iap model dap model directly learns attribute classifiers known classes iap model builds attribute
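To make the DAP decision rule just described concrete, here is a minimal numpy sketch. All numbers and names are synthetic and for illustration only; it combines per-attribute posteriors from attribute classifiers into a posterior over unseen classes through a binary class/attribute association matrix, in the spirit of Lampert et al.

```python
# Toy sketch of the DAP (direct attribute prediction) decision rule.
# Synthetic data; hypothetical names; illustrative only.
import numpy as np

# Binary association matrix for 3 unseen classes x 5 attributes
# (e.g., obtained from human experts or a concept ontology).
A = np.array([[1, 0, 1, 1, 0],
              [0, 1, 1, 0, 1],
              [1, 1, 0, 0, 1]])

# Attribute posteriors p(a_m = 1 | x) for one test image, as produced
# by attribute classifiers trained on SEEN classes.
p_attr = np.array([0.9, 0.2, 0.7, 0.8, 0.1])

# Attribute priors p(a_m = 1), e.g. empirical means over seen classes.
prior = np.full(5, 0.5)

# log p(z | x) is proportional to sum_m [log p(a_m^z | x) - log p(a_m^z)].
log_post = (A * np.log(p_attr) + (1 - A) * np.log(1 - p_attr)
            - (A * np.log(prior) + (1 - A) * np.log(1 - prior))).sum(axis=1)
print("predicted unseen class:", int(np.argmax(log_post)))
```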
classifiers combining probabilities associated known classes also introduced direct model rohrbach training step learn probabilistic multiclass classifier estimate training classes estimated use way dap learning classification problems testing step predict semantic embedding semantic embedding learns mapping visual feature space semantic space various semantic representations discussed sec attributes introduced describe objects learned attributes may optimal recognition tasks end akata proposed idea label embedding takes image classification problem minimising compatibility function image label embedding work modified ranking objective function derived wsabie model attributes may suffer problems partial occlusions scale changes images proposed learning extracting attributes segments containing entire object joint learning simultaneous object classification segment proposal ranking attributes thus learned embedding empirical risk class label well segmentation quality semantic embedding algorithms also investigated learning framework latent svm learning embedding common spaces besides semantic embedding relationship visual semantic space learned jointly exploring exploiting common intermediate space extensive efforts made towards direction akata learned joint embedding semantic space attributes text hierarchical relationships employed text features predict output weights convolutional fully connected layers deep convolutional neural network cnn one dataset may exist many different types semantic representations type representation may contain complementary information fusing potentially improve recognition performance thus several recent works studied different methods embedding employed semantic class label graph fuse scores different semantic representations similarly label relation graphs also studied significantly improved object classification supervised recognition scenarios number successful approaches learning semantic embedding space rely canonical correlation analysis cca hardoon proposed general kernel cca method learning semantic embedding web images associated text embedding enables direct comparison text images many works focused modeling associated text tags cca often exploited provide unsupervised fusion different modalities gong also investigated problem modeling internet images associated text tags proposed cca embedding framework retrieval tasks additional view allows framework outperform number baselines retrieval tasks (a minimal cca sketch follows this passage) proposed embedding model jointly exploring functional relationships text image features transferring labels help annotate images label transfer generalized recognition deep embedding recent recognition models rely deep convolutional models extract image features one first works devise extended deep architecture learn visual semantic embedding identify visual objects using labeled image data well semantic information gleaned unannotated text conse constructed image embedding approach mapping images semantic embedding space via convex combination class label embedding vectors devise conse evaluated datasets imagenet ilsvrc imagenet dataset combine visual textual branches deep embedding different loss functions considered including losses euclidean distance loss least square loss zhang employed visual space embedding space proposed deep learning architecture recognition networks two branches visual encoding branch uses convolutional neural network encode input image feature vector semantic embedding branch encodes input semantic representation vector class
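The CCA-based joint embedding of two "views" (e.g., visual features and text/tag features) discussed above can be sketched in a few lines with scikit-learn. This is a minimal illustration on synthetic data, not a reproduction of any particular paper's pipeline.

```python
# Minimal sketch of a CCA joint embedding of image and text views.
# Synthetic data; illustrative only.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 4))                  # shared semantics
X_img = latent @ rng.normal(size=(4, 64)) + 0.1 * rng.normal(size=(300, 64))
X_txt = latent @ rng.normal(size=(4, 32)) + 0.1 * rng.normal(size=(300, 32))

cca = CCA(n_components=4).fit(X_img, X_txt)
Z_img, Z_txt = cca.transform(X_img, X_txt)

# In the shared space, an image and its own text should be most similar,
# which enables cross-modal retrieval by direct comparison.
sim = Z_img @ Z_txt.T
print("retrieval accuracy:", np.mean(sim.argmax(axis=1) == np.arange(300)))
```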
corresponding image belonging recognition models embedding space embedding model learned testing instances projected embedding space recognition carried using different recognition models common used one nearest neighbour classifier classify testing instances assigning class label term nearest distances class prototypes projections testing instances embedding space (a toy sketch of this nearest-prototype rule follows this passage) proposed learning algorithm update class prototypes one step manifold information used recognition models embedding space proposed hypergraph structure embedding space recognition addressed label propagation unseen prototype instances unseen testing instances changpinyo synthesized classifiers embedding space recognition zeroshot learning recognition models consider different semantic labels latent svm structure also used recognition models wang treated object attributes latent variables learnt correlations attributes undirected graphical model hwang utilized kernelized feature learning framework learn sharing features objects [figure: illustration of the projection domain shift problem across visual space, attribute space, and embedding space — class prototypes annotated as red stars, predicted semantic attribute projections shown in blue; pig and zebra share the hastail attribute yet the attribute has different visual appearance for the two classes] additionally long employed attributes synthesize unseen visual features training stage thus recognition solved conventional supervised classification models presence universal neighbors hubs space radovanovic first study hubness problem hypothesis made hubness inherent property data distributions high dimensional vector space nevertheless low challenged hypothesis showed evidence hubness rather boundary effect generally effect density gradient process data generation interestingly experiments showed hubness phenomenon also occur data causes hubness still investigation recent works noticed regression based learning methods suffer problem alleviate problem dinu utilized global distribution feature instances unseen data transductive manner contrast yutaro addressed problem inductive way embedding class prototypes visual feature space beyond zero shot recognition problems recognition generalized recognition recognition two intrinsic problems recognition namely projection domain shift problem sec hubness problem sec projection domain shift problems projection domain shift problem recognition first identified problem explained follows since source target datasets different classes underlying data distribution classes may also differ projection functions learned source dataset visual space embedding space without adaptation target dataset cause unknown figure gives intuitive illustration problem plots attribute space representation spanned feature projections learned source data class prototypes binary attribute vectors zebra pig one auxiliary target classes respectively hastail semantic attribute means different visual appearance pig zebra attribute space directly using projection functions learned source datasets zebra target datasets pig lead large discrepancy class prototype target class predicted semantic attribute projections alleviate problem transductive learning based approaches proposed utilize manifold information instances unseen classes nevertheless transductive setting assumes testing data accessed obviously invalid new unseen classes appear dynamically unavailable learning models thus inductive learning based approaches also studied methods
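The nearest-prototype decision rule discussed above is the simplest recognition step once an embedding is learned. Below is a toy sketch; the projection W, the prototypes, and all dimensions are hypothetical placeholders for whatever a real pipeline would learn.

```python
# Toy sketch of nearest-prototype zero-shot classification: project a
# test feature into the embedding space, then assign the unseen class
# whose semantic prototype is closest by cosine similarity.
import numpy as np

def zero_shot_predict(x, W, prototypes):
    """x: (d,) visual feature; W: (k, d) learned projection;
    prototypes: (C, k) semantic vectors of the unseen classes."""
    z = W @ x
    z = z / np.linalg.norm(z)
    P = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return int(np.argmax(P @ z))   # index of nearest unseen-class prototype

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 128))            # hypothetical learned projection
prototypes = rng.normal(size=(5, 10))     # e.g., attribute/word vectors
x = rng.normal(size=128)
print("predicted unseen class:", zero_shot_predict(x, W, prototypes))
```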
usually enforce additional constraints information training data hubness problem hubness problem another interesting phenomenon may observed recognition essentially hubness problem described conventional supervised learning tasks taken granted algorithms take form closed set testing classes known training time recognition contrast assumes source target classes mixed testing data coming unseen classes assumption course greatly unrealistically simplifies recognition tasks relax settings recognition investigate recognition tasks generic setting several tasks advocated beyond conventional recognition particular generalized recognition open set recognition tasks discussed recently generalized recognition proposed broke restricted nature conventional recognition also included training classes among testing data chao showed nontrivial ineffective directly extend current learning approaches solve generalized recognition generalized setting due practical nature recommended evaluation settings recognition tasks recognition contrast developed independently recognition initially open set recognition aimed breaking limitation closed set recognition setup specifically task open set recognition tries identify class name image large set classes includes limited training classes open set recognition roughly divided two subgroups conventional open set recognition first formulated conventional open set recognition identifies whether testing images come training classes unseen classes category methods explicitly predict unseen classes testing instance unseen classes belongs setting conventional open set recognition also known incremental learning generalized open set recognition key difference conventional open set recognition generalized open set recognition also needs explicitly predict semantic meaning class testing instances even unseen novel classes task first defined evaluated tasks object categorization generalized open set recognition taken general version recognition classifiers trained training instances limited training classes whilst learned classifiers required classify testing instances large set open vocabulary say class vocabulary conceptually similar vast variants generalized recognition tasks studied research community object retrieval person searching targets open vocabulary scene parsing recognition problem learning learning problem instead textual description new classes learning assumes one training samples class similar recognition recognition inspired fact humans able learn new object categories one examples existing learning approaches divided two groups direct supervised learning based approaches transfer learning based approaches direct supervised approaches early approaches assume exist set auxiliary classes related ample training samples whereby transferable knowledge extracted compensate lack training samples instead target classes used trained standard classifier using supervised learning simplest method employ nonparametric models knn restricted number training samples however without learning distance metric used knn often inaccurate overcome problem metric embedding learned used knn classification approaches attempt synthesize training samples augment small training dataset however without knowledge transfer classes performance direct supervised learning based approaches typically weak importantly models meet requirement lifelong learning new unseen classes added learned classifier still able recognize seen existing classes transfer recognition category approaches follow 
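Aside, before the transfer-based one-shot approaches continue below: the open-set decisions discussed in the preceding passage reduce, in their simplest form, to a reject option on the nearest known class. The sketch below is a minimal, hypothetical illustration (threshold and data are synthetic), not any particular paper's method.

```python
# Minimal open-set decision sketch: accept the nearest known class only
# if its similarity clears a threshold, else declare "unseen/unknown".
import numpy as np

def open_set_predict(z, class_means, tau=0.6):
    """z: (k,) embedded test point; class_means: (C, k) known classes;
    tau: rejection threshold (synthetic choice here)."""
    z = z / np.linalg.norm(z)
    M = class_means / np.linalg.norm(class_means, axis=1, keepdims=True)
    sims = M @ z
    best = int(np.argmax(sims))
    return best if sims[best] >= tau else -1   # -1 = reject as unknown

rng = np.random.default_rng(0)
means = rng.normal(size=(4, 10))
print(open_set_predict(rng.normal(size=10), means))
```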
similar setting learning assume auxiliary set training data different classes exist explore paradigm learning learn aim transfer knowledge auxiliary dataset target dataset one examples per class approaches differ knowledge transferred knowledge represented specifically knowledge extracted shared form model prior generative model features semantic attributes contextual information many approaches take similar strategy existing learning approaches transfer knowledge via shared embedding space embedding space typically formulated using neural networks siamese network discriminative support vector regressors svr metric learning kernel embedding methods particularly one common embedding ways semantic embedding normally explored projecting visual features semantic entities common new space projections take various forms corresponding loss functions sje wsabie ale devise cca recently deep received increasing attention learning wang proposed idea adaptation automatically learning generic category agnostic transformation models learned samples models learned large enough sample sets framework proposed finn trains deep model auxiliary dataset objective learned model effectively new classes one gradient steps note similar generalised learning setting recently problem adding new classes deep neural network whilst keeping ability recognise old classes attempted however problem lifelong learning progressively adding new classes remains unsolved problem datasets zero shot recognition section summarizes datasets used recognition recently increasing number proposed recognition algorithms xian compared analyzed significant number methods depth defined new benchmark unifying evaluation protocols data splits details datasets listed tab standard datasets animal attribute awa dataset awa consists animal category images collected online images least examples class seven different feature types provided rgb color histograms sift rgsift phog surf local histograms decaf awa dataset defines classes animals associated attributes furry claws consistent evaluation object classification methods awa dataset defined test classes chimpanzee giant panda hippopotamus humpback whale leopard pig raccoon rat seal images classes taken test data whereas images remaining classes used training since images awa not available public license xian introduced another new learning dataset animals attributes dataset publicly licensed released images classes attributes awa dataset subset pascal voc data set object classes apascal images collected using yahoo image search engine ayahoo object classes image data set annotated binary attributes characterize visible objects dataset contains images bird classes challenging dataset awa designed recognition classes fewer images images annotated bounding boxes part locations attribute labels images annotations filtered multiple users amazon mechanical turk used benchmarks dataset categorization part localization class annotated binary attributes derived bird species ontology typical setting use classes auxiliary data holding target data setting adopted akata outdoor scene recognition osr dataset osr consists images categories attributes openness natural etc average labelled pairs attribute training images graphs constructed thus extremely sparse pairwise attribute annotation collected amt kovashka pair labelled workers average comparisons majority voting image also belongs scene type public figure face database pubfig pubfig large face dataset images people collected internet parikh selected subset
pubfig consisting images people attributes smiling round face annotate subset pairwise attribute annotation collected amazon mechanical turk pair labelled workers total training images respectively labelled average number compared pairs per attribute sun attribute dataset subset sun database scene categorization images classes images per class image annotated binary attributes describe scenes material surface properties well lighting conditions functions affordances general image layout unstructured social activity attribute usaa dataset usaa first benchmark video attribute dataset social activity video classification annotation groundtruth attributes annotated semantic class videos columbia consumer video ccv dataset select videos training testing respectively classes selected complex social group activities referring existing work video ontology attributes divided five broad classes actions objects scenes sounds camera movement directly using attributes input svm videos come classification accuracy illustrates challenge usaa dataset attributes informative sufficient variability even perfect knowledge attributes also insufficient perfect classification imagenet datasets imagenet used several different papers relatively different settings original imagenet dataset proposed full set imagenet contains million labeled images belonging roughly categories labelled human annotators using amazon mechanical turk amt tool starting part pascal visual object challenge annual competition called imagenet visual recognition challenge ilsvrc held ilsvrc uses subset imagenet roughly images categories rohrbach split ilsvrc data classes data employed training data ilsvrc source data testing part ilsvrc well data ilsvrc target data full sized imagenet data used oxford flower dataset oxford collection groups flowers flower images total images total flowers chosen common flower species united kingdom elhoseiny generated textual descriptions class dataset dataset another popular benchmark human action recognition videos consists video clips hours total annotated classes recently action recognition challenge created benchmark extending upon dataset used training set additional videos collected internet including background videos validation test videos video dataset fcvid fcvid contains web videos annotated manually categories categories cover wide range topics activities social events tailgate party procedural events making cake object appearances panda scenic videos beach standard split consists videos training videos testing activitynet dataset activitynet another largescale video dataset human activity recognition understanding released consisted video clips annotated activity classes totaling hours video comparing existing dataset activitynet action categories drinking beer drinking coffee activitynet settings trimmed untrimmed videos classes [table: datasets for zero-shot recognition — awa, pubfig, osr, imagenet, ilsvrc, oxford flower, usaa, activitynet, fcvid — columns give numbers of instances (up to millions), numbers of classes, and attribute annotation level (per class, per image, per image pairs, or per video); the datasets divide into three groups: general image classification, fine grained image classification, video classification] discussion datasets tab roughly divide datasets three groups general image classification fine grained image classification video classification datasets datasets employed widely benchmark datasets many previous works however
believe making comparison existing methods datasets several issues discussed features renaissance deep convolutional neural networks deep features used recognition note different types deep features overfeat resnet varying level semantic abstraction representation ability even type deep features different dataset slightly different parameters also different representative ability thus obvious without using type features possible conduct fair comparisons among different methods draw meaningful conclusion importantly possible improved performance one zero shot recognition could largely attributed better deep features used auxiliary data mentioned recognition formulated transfer learning setting size quality auxiliary data important overall performance recognition note auxiliary data include auxiliary source dataset also refer data concept ontology semantic word vectors example semantic word vectors trained linguistic articles general better semantically distributed trained small sized linguistic corpus similarly glove reported better cbow models therefore make fair comparison existing works another important factor use set auxiliary data evaluation many datasets agreed splits evaluation xian suggested new benchmark unifying evaluation protocols data splits vii future research directions generalized realistic setting detailed review existing learning methods clear overall existing efforts focused rather restrictive impractical setting classification required new object classes new unseen classes though training sample present assumed known reality one wants progressively add new classes existing classes importantly needs achieved without jeopardizing ability model recognize existing seen classes furthermore one cannot assume new samples come set known unseen classes rather assumed belong either existing seen classes known unseen classes unknown unseen classes therefore foresee generalized setting adopted future learning work combining learning mentioned earlier problems learning closely related result many existing methods use similar models however somewhat surprising note serious efforts taken address two problems jointly particular learning would typically consider possibility training samples learning ignores fact textual knowledge new class always exploited existing zeroshot learning methods included learning experiments however typically use naive knn approach class prototype treated training sample together becomes recognition problem however shown existing learning methods prototype worth far one training sample thus treated differently thus expect future direction extending existing learning methods incorporating prototype super improve model learning beyond object categories far current learning efforts limited recognizing object categories however visual concepts far complicated relationships object categories particular beyond important visual concepts combined objects attribute often different meaning concept yellow yellow face yellow banana clearly differs learning attributes associated objects thus interesting future research direction curriculum learning lifelong learning setting model incrementally learn recognise new classes whilst keep capacity existing classes related problem thus select suitable new classes learn given existing classes shown sequence adding different classes clear impact model performance therefore useful investigate incorporate curriculum learning principles designing learning strategy viii conclusion paper reviewed recent advances zero shot recognition firstly
different types semantic representations examined compared models used zero shot learning also investigated next beyond zero shot recognition open set recognition identified two important related topics thus reviewed finally common used datasets recognition reviewed number issues existing evaluations recognition methods discussed also point number research directions believe focus future recognition studies acknowledgments work supported part two grants nsf china european project yanwei supported program professor special appointment eastern scholar shanghai institutions higher learning references biederman recognition components theory human image understanding psychological review chen shrivastava gupta neil extracting visual knowledge web data ieee international conference computer vision pentina lampert bound lifelong learning international conference machine learning thrun mitchell lifelong robot learning robotics autonomous systems pan yang survey transfer learning ieee transactions data knowledge engineering vol patel gopalan chellappa visual domain adaptation survey recent advances ieee signal processing magazine spm palatucci hinton pomerleau mitchell learning semantic output codes nips kumar berg belhumeur nayar attribute simile classifiers face verification iccv lampert nickisch harmeling classification visual object categorization ieee tpami jiang sigal video emotion recognition transferred deep feature encodings icmr hospedales xiang gong attribute learning understanding unstructured social activity eccv liu kuipers savarese recognizing human actions attributes ieee conference computer vision pattern recognition hospedales xiang gong learning multimodal latent attributes ieee tpami blitzer foster kakade domain adaptation approach tech socher ganjoo sridhar bastani manning learning transfer nips thrun learning learn introduction kluwer academic publishers farhadi endres hoiem forsyth describing objects attributes cvpr wang zhang clothes search consumer photos via color matching attribute learning acm international conference multimedia online available http salakhutdinov torralba tenenbaum learning share visual appearance multiclass object detection ieee conference computer vision pattern recognition hwang sha grauman sharing features objects attributes ieee conference computer vision pattern recognition parikh grauman relative attributes iccv kovashka parikh grauman whittlesearch image search relative attribute feedback ieee conference computer vision pattern recognition berg berg shih automatic attribute discovery characterization noisy web data european conference computer vision hospedales xiong xiang gong yao wang robust estimation subjective visual properties crowdsourced pairwise labels ieee tpami gygli grabner riemenschneider nater gool interestingness images ieee international conference computer vision jiang yanranwang feng xue zheng yang understanding predicting interestingness videos aaai conference artificial intelligence isola parikh torralba oliva understanding intrinsic memorability images neural information processing systems isola xiao torralba oliva makes image memorable ieee conference computer vision pattern recognition dhar ordonez berg high level describable attributes predicting aesthetics interestingness ieee conference computer vision pattern recognition guo huang age synthesis estimation via faces survey ieee transactions pattern analysis machine intelligence chen gong xiang loy cumulative attribute space age crowd density estimation ieee conference
computer vision pattern recognition lampert nickisch harmeling learning detect unseen object classes attribute transfer cvpr rudd gunther boult moon mixed objective optimization network recognition facial attributes eccv rudd boult moon mixed objective optimization network recognition facial attributes arxiv preprint wang cheng feris walk learn facial attribute representation learning egocentric video contextual data cvpr datta feris vaquero hierarchical ranking facial attributes ieee international conference automatic face gesture recognition ehrlich shields almaev amer facial attributes classification using representation learning proceedings ieee conference computer vision pattern recognition workshops jafri arabnia survey face recognition techniques journal information processing systems wang feng jiang xue multitask deep neural network joint face recognition facial attribute prediction acm icmr argyriou evgeniou pontil convex feature learning acm icmr fouhey gupta zisserman understanding higherorder shape via shape attributes ieee tpami vaquero feris tran brown hampapur turk people search surveillance environments ieee workshop applications computer vision wacv wang forsyth joint learning visual attributes object classes visual saliency ieee international conference computer vision ferrari zisserman learning visual attributes neural information processing systems shrivastava singh gupta constrained learning via attributes comparative attributes european conference computer vision biswas parikh simultaneous active learning classifiers attributes via relative feedback ieee conference computer vision pattern recognition parkash parikh attributes classifier feedback european conference computer vision singh lee localization ranking relative attributes eccv jaderberg simonyan zisserman spatial transformer networks advances neural information processing systems parikh grauman interactively building discriminative vocabulary nameable attributes ieee conference computer vision pattern recognition tang yan hong chua inferring semantic concepts images noisy tags acm international conference multimedia online available http habibian mensink snoek videostory new multimedia embedding recognition translation events acm hauptmann yan lin christel wactlar concepts fill semantic gap video retrieval case study broadcast news ieee transactions multimedia vol snoek huurnink hollink rijke schreiber worring adding semantics detectors video retrieval ieee transactions multimedia vol toderici aradhye pasca sbaiz yagnik finding meaning youtube tag recommendation category discovery ieee conference computer vision pattern recognition jiang sigal harnessing object scene semantics video understanding cvpr jain van gemert mensink snoek classifying localizing actions without video example iccv tang hua wang correlative linear neighborhood propagation video annotation ieee transactions systems man cybernetics part vol hua rui tang mei zhang correlative video annotation acm international conference multimedia online available http fergus bernal weiss torralba semantic label sharing learning many categories european conference computer vision rohrbach stark schiele evaluating knowledge transfer learning setting cvpr rohrbach stark szarvas gurevych schiele helps semantic relatedness knowledge transfer cvpr mensink gavves snoek costa statistics classification ieee conference computer vision pattern recognition gan yang zhu zhuang recognizing action using name approach ijcv gan lin yang melo hauptmann concepts alone 
exploring pairwise relationships video activity recognition aaai norouzi mikolov bengio singer shlens frome corrado dean learning convex combination semantic embeddings iclr zhang saligrama learning via joint latent similarity embedding cvpr recognition via structured prediction eccv yang hospedales xiang gong transductive learning british machine vision conference frome corrado shlens bengio dean ranzato mikolov devise deep embedding model nips huang socher manning improving word representations via global context multiple word prototypes association computational linguistics conference jiang sigal heterogeneous knowledge transfer video emotion recognition attribution summarization ieee tac sorokin forsyth utility data annotation amazon mechanical turk ieee conference computer vision pattern recognition workshops qin wang liu chen shao beyond semantic attributes discrete latent attributes learning recognition ieee signal processing letters vol chang yang long zhang hauptmann dynamic concept composition event detection aaai chang yang hauptmann xing semantic concept discovery event detection ijcai qin liu shao shen chen yunhongwang action recognition output codes cvpr yang hua wang zhang tag tagging towards descriptive keywords image content ieee transactions multimedia vol hospedales gong xiang learning tags unsegmented videos multiple human actions international conference data mining aradhye toderici yagnik learning annotate video content proc ieee int conf data mining workshops icdmw yang toderici discriminative tag learning youtube videos latent ieee conference computer vision pattern recognition luisier bondugula event detection using fusion weakly supervised concepts cvpr elhoseiny saleh elgammal write classifier zeroshot learning using purely textual descriptions ieee international conference computer vision december swersky fidler salakhutdinov predicting deep convolutional neural networks using textual descriptions iccv reed akata schiele learning deep representations visual descriptions cvpr miller wordnet lexical database english commun acm vol mikolov chen corrado dean efficient estimation word representation vector space proceedings workshop international conference learning representations mikolov sutskever chen corrado dean distributed representations words phrases compositionality neural information processing systems harris distributional structure dordrecht springer netherlands larochelle erhan bengio learning new tasks aaai aloimonos transfer learning object categorization training example european conference computer vision jayaraman grauman zero shot recognition unreliable attributes nips akata perronnin harchaoui schmid labelembedding classification cvpr weston bengio usunier large scale image annotation learning rank joint embeddings machine learning gavves mensink snoek attributes make sense segmented objects european conference computer vision guo learning multiclass classification aistats guo schuurmans classification label representation learning iccv jayaraman sha grauman decorrelating semantic visual attributes resisting urge share cvpr hwang sigal unified semantic embedding relating taxonomies attributes nips akata reed walter lee schiele evaluation output embeddings image classification cvpr torr embarrassingly simple approach learning icml hospedales xiang gong transductive embedding recognition annotation eccv yang hospedales unified perspective learning iclr mahajan sellamanickam nair joint learning framework attribute models object descriptions ieee 
international conference computer vision xiang kodirov gong object recognition semantic manifold distance cvpr deng ding jia frome murphy bengio neven adam object classification using label relation graphs eccv hardoon szedmak canonical correlation analysis overview application learning methods neural computation socher connecting modalities segmentation annotation images using unaligned text corpora ieee conference computer vision pattern recognition gong isard lazebnik embedding space modeling internet images tags semantics international journal computer vision hwang grauman learning relative importance objects tagged images retrieval search international journal computer vision wang gong translating topics words image annotation acm international conference conference information knowledge management liu aggarwal huang joint intermodal intramodal label transfers extremely rare unseen classes ieee tpami szegedy liu jia sermanet reed anguelov erhan vanhoucke rabinovich going deeper convolutions cvpr zhang xiang gong learning deep embedding model learning cvpr hospedales xiang gong transductive multiview learning ieee tpami changpinyo chao gong sha synthesized classifiers learning cvpr zhang gong shah fast image tagging cvpr wang mori discriminative latent model image region object tag correspondence neural information processing systems long liu shao shen ding han zeroshot learning conventional supervised classification unseen visual data synthesis cvpr kodirov xiang gong unsupervised domain adaptation learning iccv rohrbach ebert schiele transfer learning transductive setting nips wang lin zhuang recognition using dual mapping paths cvpr hospedales gong transductive action recognition embedding ijcv marco angeliki georgiana hubness pollution delving mapping learning acl low borgelt stober hubness phenomenon fact artifact dinu lazaridou baroni improving learning mitigating hubness problem iclr workshop shigeto suzuki hara shimbo matsumoto ridge regression hubness learning chao changpinyo gong empirical study analysis generalized learning object recognition wild eccv scheirer jain boult probability models open set recognition ieee tpami scheirer rocha sapkota boult towards open set recognition ieee tpami sigal learning cvpr dong feng zhang xue vocabularyinformed extreme value learning arxiv xian schiele akata learning good bad ugly cvpr bendale boult towards open world recognition cvpr sattar muller fritz bulling prediction search targets fixations settings cvpr gomes welling perona incremental learning nonparametric bayesian mixture models ieee conference computer vision pattern recognition diehl cauwenberghs svm incremental learning adaptation optimization ijcnn vol july rebuffi kolesnikov sperl lampert icarl incremental classifier representation learning cvpr guadarrama rodner saenko zhang farrell donahue darrell object retrieval robotics science systems rss guadarrama rodner saenko darrell understanding object descriptions robotics object retrieval detection journal international journal robotics research zheng gong xiang towards person reidentification verification ieee tpami zhao puig zhou fidler torralba open vocabulary scene parsing cvpr jankowski norbert duch wodzislaw grabczewski krzyszto computational intelligence springer science business media lake salakhutdinov learning inverting compositional causal process nips goldberger hinton roweis salakhutdinov neighbourhood components analysis advances neural information processing systems saul weiss bottou eds mit press online 
available http kulkarni whitney kohli tenenbaum deep convolutional inverse graphics network nips kulkarni mansinghka kohli tenenbaum inverse graphics probabilistic cad models lake salakhutdinov tenenbaum concept learning probabilistic program induction nips vilalta drissi perspective view survey metalearning artificial intelligence review fergus perona bayesian approach unsupervised learning object categories ieee international conference computer vision learning object categories ieee tpami tommasi caputo know less learn knowledge transfer learning object categories british machine vision conference bart ullman learning novel classes single example feature replacement cvpr hertz hillel weinshall learning kernel function classification small training samples icml fleuret blanchard pattern recognition one example chopping nips amit fink uncovering shared structures multiclass classification icml wolf martin robust boosting learning examples cvpr torralba murphy freeman sharing visual features multiclass multiview object detection ieee tpami torralba murphy freeman using forest see trees exploiting context visual object detection localization commun acm bromley bentz bottou guyon lecun moore sackinger shah signature verification using siamese time delay neural network ijcai koch zemel salakhutdinov siamese neural networks image recognition icml deep learning workshop kienzle chellapilla personalized handwriting recognition via biased regularization icml quattoni collins darrell transfer learning image classification sparse prototype representations ieee conference computer vision pattern recognition fink object classification single example utilizing class relevance metrics nips wolf hassner taigman similarity kernel iccv weston bengio usunier wsabie scaling large vocabulary image annotation ijcai santoro bartunov botvinick wierstra lillicrap oneshot learning neural networks arxiv bertinetto henriques valmadre torr vedaldi learning learners nips habibian mensink snoek embeddings recognize events examples scarce ieee tpami vinyals blundell lillicrap kavukcuoglu wierstra matching networks one shot learning nips zhang dana nishino friction reflectance deep reflectance codes predicting physical surface properties oneshot reflectance eccv wang hebert learning small sample sets combining unsupervised cnns nips learning learn model regression networks easy small sample learning eccv finn abbeel levine fast adaptation deep networks proceedings international conference machine learning ser proceedings machine learning research precup teh vol international convention centre sydney australia pmlr aug online available http rusu rabinowitz desjardins soyer kirkpatrick kavukcuoglu pascanu hadsell progressive neural networks arxiv preprint lowe distinctive image features keypoints international journal computer vision vol van sande gevers snoek evaluation color descriptors object scene recognition ieee conference computer vision pattern recognition bosch zisserman munoz representing shape spatial pyramid kernel acm international conference image video retrieval bay ess tuytelaars gool surf speeded robust features computer vision image understanding vol shechtman irani matching local across images videos ieee conference computer vision pattern recognition donahue jia vinyals hoffman zhang tzeng darrell decaf deep convolutional activation feature generic visual recognition international conference machine learning wah branson welinder perona belongie dataset california institute technology tech oliva torralba
modeling shape scene a holistic representation spatial envelope international journal computer vision vol patterson hays sun attribute database discovering annotating recognizing scene ieee conference computer vision pattern recognition xiao hays ehinger oliva torralba sun database scene recognition abbey zoo ieee conference computer vision pattern recognition jiang chang ellis loui consumer video understanding benchmark database evaluation human machine performance acm international conference multimedia retrieval zha mei wang hua building comprehensive ontology refine video concept detection proceedings international workshop multimedia information retrieval ser mir new york usa acm online available http deng dong socher imagenet hierarchical image database cvpr nilsback zisserman automated flower classification large number classes proceedings indian conference computer vision graphics image processing soomro zamir shah dataset human action classes videos wild idrees zamir jiang gorban laptev sukthankar shah thumos challenge action recognition videos wild computer vision image understanding jiang wang xue chang exploiting feature class relationships video categorization regularized deep neural networks ieee tpami ghanem niebles activitynet video benchmark human activity understanding cvpr sermanet eigen zhang mathieu fergus lecun overfeat integrated recognition localization detection using convolutional networks iclr chatfield simonyan vedaldi zisserman return devil details delving deep convolutional nets bmvc zhang ren sun deep residual learning image recognition arxiv preprint pennington socher manning glove global vectors word representation emnlp xian akata sharma nguyen hein schiele latent embeddings classification cvpr pentina lampert lifelong learning tasks nips pentina sharmanska lampert curriculum learning multiple tasks cvpr yanwei received bsc degree information computing sciences nanjing university technology meng degree department computer science technology nanjing university china pursuing phd vision group eecs queen mary university london research interest attribute learning topic model learning rank video summarization image segmentation tao xiang received degree electrical computer engineering national university singapore currently reader associate professor school electronic engineering computer science queen mary university london research interests include computer vision machine learning data mining published papers international journals conferences leonid sigal associate professor university british columbia prior senior research scientist disney research completed brown university received boston university brown university leonid research interests lie areas computer vision machine learning computer graphics leonid research emphasis machine learning statistical approaches visual recognition understanding analytics published papers venues journals fields including tpami ijcv cvpr iccv nips jiang professor school computer science fudan university china lab big video data analytics conducts research aspects extracting information big video data video event recognition recognition visual search work led many awards including inaugural acm china rising star award acm sigmm rising star award xiangyang xue received degrees communication engineering xidian university china respectively currently professor computer science fudan university shanghai china research interests include
multimedia information processing machine learning shaogang gong received dphil degree keble college oxford university professor visual computation queen mary university since fellow institution electrical engineers fellow british computer society research interests include computer vision machine learning video analysis
2
reduction differential inclusions lyapunov stability rushikesh kamalapurkar warren dixon andrew teel paper locally lipschitz regular functions utilized identify remove infeasible directions differential inclusions resulting reduced differential inclusion smaller sense set containment original differential inclusion reduced inclusion utilized develop generalized notion time derivative locally lipschitz candidate lyapunov functions developed generalized derivative yields less conservative statements lyapunov stability results results matrosov results differential inclusions illustrative examples included demonstrate utility developed stability theorems index terms differential inclusions stability systems stability hybrid systems nonlinear systems introduction differential inclusions used model analyze large variety practical systems example systems utilize discontinuous control architectures sliding mode control multiple model sparse neural network adaptive control finite state machines gain scheduling control analyzed using theory differential inclusions differential inclusions also used analyze robustness bounded perturbations modeling errors model physical phenomena coulomb friction impact model differential games asymptotic properties trajectories differential inclusions typically analyzed using comparison functions several generalized notions directional derivative utilized characterize change value candidate lyapunov function along trajectories differential inclusions early results stability differential inclusions utilize nonsmooth candidate lyapunov functions based dini directional derivatives contingent derivatives chapter locally lipschitz regular candidate lyapunov functions stability results based clarke notion generalized directional derivatives developed results paden [footnote: rushikesh kamalapurkar school mechanical aerospace engineering oklahoma state university stillwater usa warren dixon department mechanical aerospace engineering university florida gainesville usa wdixon andrew teel department electrical computer engineering university california santa barbara usa teel research supported part nsf award numbers onr grant number afosr award number opinions findings conclusions recommendations expressed material authors necessarily reflect views sponsoring agency] sastry utilize clarke gradient develop generalized derivative along several stability theorems bacciotti ceragioli introduce another generalized derivative results sets smaller pointwise generated derivative hence lyapunov theorems generally less conservative counterparts lyapunov theorems developed bacciotti ceragioli also shown less conservative based dini contingent derivatives provided locally lipschitz regular candidate lyapunov functions employed proposition paper preliminary work locally lipschitz regular functions utilized identify remove infeasible directions differential inclusion yield pointwise smaller sense set containment equivalent differential inclusion using reduced differential inclusion novel generalization derivative concepts introduced locally lipschitz lyapunov functions developed technique yields less conservative statements lyapunov stability results invariance results invariancelike results nonautonomous systems theorem matrosov results differential inclusions paper organized follows section introduces notation sections iii review differential inclusions derivatives section locally lipschitz regular functions used identify infeasible directions differential inclusion section includes novel generalization notion
respect differential inclusion sections vii viii state lyapunov stability results utilize novel definition generalized autonomous nonautonomous differential inclusions respectively illustrative examples developed stability theory less conservative results presented section summarizes article includes concluding remarks notation euclidean space denoted denotes lebesgue measure elements interpreted column vectors denotes vector transpose operator set positive integers excluding denoted denotes interval denotes interval unless otherwise specified denotes initial time interval assumed nonzero length notation used denote map subsets notations coa used denote convex hull closed convex hull closure interior boundary set respectively denotes concatenated vector notations interchangeably used denote set denotes set denotes set implies kak kbk notation used denote sets kyk respectively notation denotes absolute value cardinality set dist inf notations lip denote essentially bounded continuously differentiable locally lipschitz functions domain codomain respectively notation denotes zero element whenever clear context subscript suppressed iii differential inclusions let map consider differential inclusion locally absolutely continuous function called solution interval almost solution called complete maximal proper right extension also solution solution maximal set compact solution called precompact similar proposition shown solution extended maximal solution hence solution exists assumed maximal without loss generality facilitate discussion let open connected let let denotes first exit time solution min sup inf inf assumed since open throughout manuscript denotes set maximal solutions denotes set maximal solutions following notions weak strong forward invariance utilized paper definition set called weakly forward invariant respect called strongly forward invariant respect following development focuses maps admit local solutions definition let let interval map said admit local solutions solution starting exists interval sufficient conditions existence local solutions found theorem theorem following lemma stated facilitate analysis slight generalization proposition lemma let open connected let map admits solutions let solution set bounded every subinterval finite length complete proof sake contradiction assume since set bounded implies since locally absolutely continuous since positive constant thus hence uniformly continuous therefore extended continuous function since continuous open since admits solutions extended solution interval contradicts maximality hence complete hypothesis lemma set bounded every subinterval finite length met locally bounded precompact proposition set valued derivatives focus article development less conservative lyapunov method analysis differential inclusions using clarke notion generalized directional derivatives gradients clarke gradients utilized paden sastry introduce following generalized notion time derivative locally lipschitz regular sense definition candidate lyapunov function respect differential inclusion definition regular function lip derivative respect defined denotes clarke gradient defined see also theorem lim set measure zero gradient defined lyapunov stability theorems developed using derivative exploit property every upper bound set also upper bound almost exists aforementioned fact consequence following proposition proposition let solution lip regular function exists almost almost proof see theorem notion derivative generalized via following definition (a hedged reconstruction of these set-valued derivatives, in standard notation, is given below before the definition resumes)
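The symbols of the two set-valued derivatives compared above were lost in extraction. The block below is a hedged reconstruction, consistent with the Paden–Sastry and Bacciotti–Ceragioli forms the text cites, for a locally Lipschitz regular V and the inclusion x' ∈ F(x); it is offered as standard notation, not as a verbatim quotation of this paper's formulas.

```latex
% Paden--Sastry set-valued derivative (intersection over Clarke gradients):
\[
  \dot{\tilde V}(x) \;=\; \bigcap_{\xi \in \partial V(x)} \xi^{\top} F(x).
\]
% Bacciotti--Ceragioli set-valued derivative (a single v must work for
% every Clarke gradient):
\[
  \overline{\dot V}(x) \;=\; \bigl\{\, a \in \mathbb{R} \;:\;
  \exists\, v \in F(x) \ \text{s.t.}\ a = \xi^{\top} v
  \ \ \forall\, \xi \in \partial V(x) \,\bigr\}.
\]
% Since any such a lies in \xi^{\top} F(x) for every \xi, one has
% \overline{\dot V}(x) \subseteq \dot{\tilde V}(x) pointwise, which is
% why tests based on the second set are less conservative, as stated.
```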
definition regular function lip derivative respect defined derivative definition results less conservative statements lyapunov stability definition since contained within derivative definition evidenced example containment strict lyapunov stability theorems developed exploit property proposition also holds see lemma following notions lyapunov stability differential inclusions definition differential inclusion said strongly stable complete asymptotically stable stable complete globally asymptotically stable stable implies complete following proposition example typical lyapunov stability result differential inclusions utilizes derivatives candidate lyapunov function proposition combines theorem specialization theorem results paper stated terms stability origin extend straightforward manner stability arbitrary compact sets proposition let upper semicontinuous map compact nonempty convex values lip positive definite regular function max max stable proof see theorem theorem following section details novel generalization notion respect differential inclusion yields less conservative statements results proposition generalization relies observation locally lipschitz regular functions utilized reduce differential inclusions pointwise sets feasible directions smaller corresponding sets educed differential inclusions definition implies max max cases example proper subset max max implies lyapunov theorems based less conservative based tighter bound evolution moves along orbit obtained examining following alternative representation max max min max regular function lip map reduction defined proposition suggest directions affect stability properties solutions included directions map clarke gradient singleton key observation paper statement remains true even replaced arbitrary locally lipschitz regular function following proposition formalizes aforementioned observation proposition let map compact nonempty values admits solutions let lip positive definite regular function let lip regular function min max stable proof proposition follows general result stated theorem definitions translate systems respectively minimization serves maintain consistency notation fact redundant proposition indicates locally lipschitz regular functions help discover admissible directions point view lyapunov stability directions relevant different candidate lyapunov function proposition reduces proposition fact differential inclusion sense equivalent differential inclusion make equivalence precise following definition reduced differential inclusion introduced since almost following example demonstrates utility theorem definition let collection realvalued locally lipschitz regular functions defined map defined gui sgn denotes sign function defined satisfies lip addition since convex also regular proposition clarke gradient computed using sgn sgn called differential inclusion key utility reduction developed definition reduced differential sufficient characterize solutions demonstrated following theorem differential inclusion admits local solutions admits local solutions every solution exists subinterval restricted also solution theorem let collection countably many realvalued locally lipschitz regular functions defined solution almost proof proof closely follows proof lemma consider set times defined defined since solution since lip absolutely continuous hence objective show since function locally lipschitz time derivative expressed lim since regular uio max example consider differential inclusion defined sgn set given otherwise 
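To make the role of the reduction concrete before the theorem is applied, the following display works out a minimal scalar instance in the same spirit. It is an illustrative sketch under assumed data (the inclusion F, the function W = V = |x|), not a verbatim reconstruction of the example above.

```latex
% Minimal scalar illustration of the reduction (illustrative, not the paper's example).
\[
  \dot{x} \in F(x), \qquad
  F(x) =
  \begin{cases}
    \{-\operatorname{sgn}(x)\}, & x \neq 0,\\
    [-1,1], & x = 0,
  \end{cases}
  \qquad
  W(x) = V(x) = |x| .
\]
% Clarke gradient of the locally Lipschitz regular function W:
\[
  \partial W(x) =
  \begin{cases}
    \{\operatorname{sgn}(x)\}, & x \neq 0,\\
    [-1,1], & x = 0 .
  \end{cases}
\]
% A direction f in F(x) is feasible only if p -> <p, f> is constant on
% \partial W(x); at x = 0 this forces f = 0, so the reduced inclusion is
\[
  F_W(x) =
  \begin{cases}
    \{-\operatorname{sgn}(x)\}, & x \neq 0,\\
    \{0\}, & x = 0 .
  \end{cases}
\]
% Along F_W the generalized derivative satisfies \dot{V}(x) = -1 for x \neq 0 and
% \dot{V}(0) = 0, which suffices for asymptotic stability, whereas the unreduced bound
% \max_{p \in \partial V(0)} \max_{f \in F(0)} \langle p, f \rangle = 1 > 0
% is inconclusive.
```

Returning to the example above: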
theorem invoked conclude every solution satisfies almost eneralized time derivatives proposition theorem suggest following notion generalized time derivative respect definition time derivative lip respect denoted defined min max uio min denote right left directional derivatives denotes derivative thus implies therefore see definitions right left directional derivatives derivative regular max max regular time derivative understood empty definition also facilitates unified treatment lyapunov stability theory using regular well nonregular candidate lyapunov functions candidate lyapunov function called lyapunov function time derivative negative definition lip positive definite called lyapunov function hence max thus judicious selection functions constructed less conservative derivatives naturally general time derivative satisfy chain rule stated proposition however satisfies following weak chain rule turns sufficient analysis differential inclusions theorem lip almost addition exists function almost proof let since open continuous nonempty consider set times defined using theorem facts absolutely continuous locally lipschitz regular arguments similar proof theorem used conclude thus theorem imply almost regular proposition see also theorem used conclude almost every thus theorem imply almost following sections develop relaxed stability theorems differential inclusions based properties time derivative hitherto established vii tability autonomous systems section lyapunov functions utilized formulate less conservative extensions stability invariance results autonomous differential inclusions form map lyapunov stability following lyapunov stability theorem consequence theorem theorem let let locally bounded map nonempty compact values admits solutions exists lyapunov function stable addition positive definite function asymptotically stable furthermore sublevel sets compact globally asymptotically stable proof given let min since continuous using theorem implies nonincreasing standard arguments see theorem shown compact strongly forward invariant hence every solution precompact lemma complete furthermore addition positive definite function theorem implies strictly decreasing provided asymptotic stability global asymptotic stability case sublevel sets compact follow standard arguments see section following example presents case tests based inconclusive theorem used establish asymptotic stability example let defined let defined consider differential inclusion candidate lyapunov function defined since derivatives bounded since neither shown negative semidefinite everywhere inequality insufficient draw conclusions regarding stability function defined see fig max min max min applications negative definite bound derivative candidate lyapunov function found easily invariance principle invoked following section develops invariance results using timederivatives invariance principle analogs invariance principle autonomous differential inclusions appear results estimates limiting invariant set less conservative developed obtained using locally lipschitz regular functions reduce admissible directions example following theorem extends invariance principle developed bacciotti ceragioli see theorem fig function satisfies lip addition since convex also regular proposition clarke gradient given sgn sgn sgn sgn sgn sgn sgn sgn sgn sgn otherwise case reduced inclusion given otherwise since since hence time derivative respect given max otherwise global asymptotic stability follows theorem theorem let locally bounded 
outer semicontinuous definition let nonempty convex compact let lip let compact strongly forward invariant set largest weakly forward invariant set implies complete dist proof existence follows theorem completeness follows lemma argument theorem indicates constant set definition note weakly invariant proposition let solution existence solution follows weak invariance since constant let set time instances defined regular arguments similar proof theorem used conclude since follows means almost regular proposition see also theorem used conclude almost every since follows means almost since hence since weakly invariant result dist implies dist following corollary illustrates one many alternative ways establish existence compact strongly forward invariant set needed apply theorem corollary let locally bounded outer semicontinuous definition let nonempty convex compact let lip let level set closednand connected component bounded largest weakly forward invariant set contained implies complete dist proof let maximal solution theorem nonincreasing continuity fact connected component fact closed imply impossible since nonincreasing hence strongly forward invariant since bounded assumption precompact hence complete lemma conclusion corollary follows theorem invariance principle often applied conclude asymptotic stability origin form following corollary complete solutions converge origin hence asymptotically stable sublevel sets compact hence selected arbitrarily large therefore globally asymptotically stable following example demonstrates utility developed invariance principle example let defined let defined consider differential inclusion candidate lyapunov function defined since derivatives bounded corollary let lip positive definite function let locally bounded outer semicontinuous definition let nonempty convex compact complete solution remains level set asymptotically stable addition sublevel sets compact globally asymptotically stable neither negative semidefinite everywhere hence inequality inconclusive let defined example differential inclusion corresponding given otherwise proof prove corollary first established every complete solution converges origin shown solutions complete let complete solution since decreasing bounded since continuous shown positive definiteness would imply hence prove using contradiction assume exists complete solution let solution solution exists since weakly invariant along solution contradicts hypothesis complete solution remains level set therefore select lyapunov stability fact strongly forward invariant follows theorem since compact solutions starting precompact hence complete lemma since time derivative respect given max max otherwise set corollary given since level sets bounded connected since largest invariant set contained within corollary invoked conclude solutions converge origin theorem almost hence given trajectory remain level set theorem state remain constant true inclusion therefore corollary invoked conclude system globally asymptotically stable sublevel sets compact globally uniformly asymptotically stable proof select let using theorem lemma viii tability onautonomous systems stating stability results nonautonomous systems following definitions stated definition differential inclusion said strongly uniformly stable complete globally uniformly stable uniformly stable complete uniformly asymptotically stable uniformly stable complete globally uniformly asymptotically stable uniformly stable complete results section stated terms stability entire state 
origin uniformity respect time extend straightforward manner partial stability uniformity respect part state see definition stability arbitrary compact sets lyapunov stability section basic stability result stated nonautonomous differential inclusions theorem let let let locally bounded map nonempty compact values admits solutions let lip positive definite function exist positive definite functions almost uniformly stable addition exists positive definite function using arguments similar theorem shown solutions satisfy every interval existence therefore solutions precompact hence complete lemma since continuous positive definite since independent uniform stability established rest proof identical section therefore omitted following example tests based inconclusive theorem invoked conclude global uniform asymptotic stability origin example let defined example let defined consider differential inclusion candidate lyapunov function defined candidate lyapunov function satisfies case since similar example derivatives satisfy bound inequality utilized therefore neither shown negative semidefinite everywhere function defined see fig max min max min almost uniformly asymptotically stable furthermore satisfies lip addition since convex also regular proposition clarke gradient given sgn sgn sgn sgn sgn sgn sgn sgn sgn given introduced example differential inclusion corresponding otherwise time derivative respect given max otherwise theorem invoked conclude globally uniformly asymptotically stable results applications adaptive control lyapunov methods commonly result semidefinite lyapunov functions candidate lyapunov functions time derivatives bounded negative semidefinite function state following theorem establishes fact function positive semidefinite asymptotically decays zero theorem let let let map nonempty compact values admits solutions let lip positive definite function satisfies positive semidefinite select locally bounded uniformly every solution complete bounded satisfies proof similar proof corollary established bounds imply map locally bounded uniformly every compact exists clf nonincreasing along solutions nonincreasing property clf used establish boundedness used prove existence uniform continuity complete solutions lemma lemma used conclude proof let using theorem lemma using arguments similar theorem shown solutions satisfy every interval existence therefore solutions precompact hence complete lemma establish uniform continuity solutions observed since locally bounded uniformly map uniformly bounded hence implies since locally absolutely continuous since positive constant thus hence uniformly continuous since continuous compact uniformly continuous hence uniformly continuous furthert monotonically increasing hence exists finite lemma lemma following example negative semidefinite upper bound theorem invoked conclude partial stability example let defined example let defined consider differential inclusion candidate lyapunov function defined candidate lyapunov function satisfies case since setvalued derivatives bounded inequality utilized thus neither negative semidefinite everywhere let defined differential inclusion corresponding given otherwise time derivative respect given max max otherwise theorem invoked conclude theorem counterparts widely used applications adaptive control establish boundedness system convergence state origin however since theorem partial stability result generally used establish convergence parameter estimates true values certain excitation conditions parameter 
convergence established using auxiliary functions theorems rely auxiliary functions fall umbrella matrosov theorems named seminal work following section focuses matrosov theorems matrosov theorems section less conservative generalization matrosov results uniform asymptotic stability nonautonomous systems developed particular nonsmooth version theorem nested matrosov theorem theorem generalized following definitions matrosov functions inspired definition let constants finite set functions said matrosov property relative definition let let constants let map nonempty compact values functions lip said matrosov functions set functions matrosov property relative max exists collection regular functions lip following technical lemmas aid proof matrosov theorem lemma given proof see claim lemma let proof see claim matrosov theorem stated follows theorem let let map nonempty compact values uniformly stable pair numbers exist matrosov functions uniformly asymptotically stable uniformly globally stable uniformly globally asymptotically stable proof select let let select repeated application lemmas shown let lip defined definition fix solution satisfies definition hence theorem almost fig function using definition almost let claim hence almost integrating using bound contradicts hence uniformly asymptotically stable uniformly globally stable selected arbitrarily large hence result global example let defined example let defined example let defined let defined follows uniform global stability concluded theorem let let let function defined see fig max min max min denotes open unit square centered origin satisfies lip addition since convex also regular proposition given sgn sgn sgn sgn sgn sgn clarke gradient sgn sgn sgn sgn sgn sgn sgn sgn sgn sgn sgn sgn differential inclusion corresponding given otherwise derivative given otherwise functions matrosov property furthermore since hence matrosov functions hence theorem uniformly globally asymptotically stable onclusion paper demonstrates locally lipschitz regular functions used identify infeasible directions differential inclusions infeasible directions removed yield smaller sense set containment equivalent differential inclusion reduction process utilized develop novel generalization derivative locally lipschitz candidate lyapunov functions less conservative statements lyapunov stability invariancelike results differential inclusions developed based reduction using locally lipschitz regular functions fact arbitrary locally lipschitz regular functions used reduce differential inclusions smaller sets admissible directions indicates may smallest set admissible directions corresponding differential inclusion research needed establish existence set find representation facilitates computation eferences filippov differential equations discontinuous sides kluwer academic publishers krasovskii subbotin control problems new york roxin stability general control systems differ vol paden sastry calculus computing filippov differential inclusion application variable structure control robot manipulators ieee trans circuits vol aubin cellina differential inclusions springer berlin shevitz paden lyapunov stability theory nonsmooth systems ieee trans autom control vol bacciotti ceragioli stability stabilization discontinuous systems nonsmooth lyapunov functions esaim control optim calc vol hui haddad bhat semistability stability differential inclusions discontinuous dynamical systems continuum equilibria ieee trans autom control vol ceragioli discontinuous ordinary 
differential equations stabilization dissertation universita firenze italy michel wang qualitative theory dynamical systems role stability preserving mappings new york marcel dekker moulay perruquetti finite time stability differential inclusions ima math control vol ryan integral invariance principle differential inclusions applications adaptive control siam control vol logemann ryan asymptotic behaviour nonlinear systems amer math vol bacciotti mazzi invariance principle nonlinear switched systems syst control vol haddad chellaboina nersesov impulsive hybrid dynamical systems princeton series applied mathematics fischer kamalapurkar dixon corollaries nonsmooth systems ieee trans autom control vol matrosov stability motion appl math vol panteley popovic teel nested matrosov theorem persistency excitation uniform convergence stable nonautonomous systems ieee trans autom control vol sanfelice teel asymptotic stability hybrid systems via nested matrosov functions ieee trans autom control vol jul paden panja globally asymptotically stable controller robot manipulators int control vol teel lee tan refinement matrosov theorem differential inclusions automatica vol ryan discontinuous feedback universal adaptive stabilization control uncertain systems springer rockafellar wets variational analysis springer science business media vol clarke optimization nonsmooth analysis siam moreau valadier chain rule involving vector functions bounded variation funct vol khalil nonlinear systems upper saddle river prentice hall vidyasagar nonlinear systems analysis siam alvarez orlov acho invariance principle discontinuous dynamic systems applications coulomb friction oscillator asme dyn syst meas control vol rushikesh kamalapurkar received degrees respectively mechanical aerospace engineering department university florida working year postdoctoral research fellow warren dixon selected mae postdoctoral teaching fellow joined school mechanical aerospace engineering oklahoma state university assistant professor primary research interest intelligent optimal control uncertain nonlinear dynamical systems published book chapters peer reviewed journal papers peer reviewed conference papers work recognized university florida department mechanical aerospace engineering best dissertation award university florida department mechanical aerospace engineering outstanding graduate research award warren dixon received department electrical computer engineering clemson university worked research staff member eugene wigner fellow oak ridge national laboratory ornl joined university florida mechanical aerospace engineering department main research interest development application control techniques uncertain nonlinear systems work recognized american automatic control council aacc hugo schuck best paper award fred ellersick award best overall milcom paper university florida college engineering doctoral dissertation mentoring award american society mechanical engineers asme dynamics systems control division outstanding young investigator award ieee robotics automation society ras early academic career award nsf career award department energy outstanding mentor award ornl early career award engineering achievement fellow asme ieee ieee control systems society css distinguished lecturer served director operations executive committee ieee css board governors awarded air force commander public service award contributions air force science advisory board currently formerly associate editor asme journal journal dynamic systems 
measurement control automatica ieee transactions systems man cybernetics part cybernetics international journal robust nonlinear control andrew teel received degree engineering sciences dartmouth college hanover new hampshire degrees electrical engineering university california berkeley respectively receiving postdoctoral fellow ecole des mines paris fontainebleau france joined faculty electrical engineering department university minnesota assistant professor subsequently joined faculty electrical computer engineering department university california santa barbara currently distinguished professor director center control dynamical systems computation research interests nonlinear hybrid dynamical systems focus stability analysis control design received nsf research initiation career awards ieee leon kirchmayer prize paper award george axelby outstanding paper award recipient first siam control systems theory prize recipient donald eckman award hugo schuck best paper award given american automatic control council also received ieee control systems magazine outstanding paper award received certificate excellent achievements ifac technical committee nonlinear control systems automatica fellow ieee ifac
3
algorithmic framework labeling network maps benjamin dec university germany karlsruhe institute technology germany abstract drawing network maps automatically comprises two challenging steps namely laying map placing labels paper tackle problem labeling already existing network map considering application metro maps present flexible versatile labeling model subsumes different labeling styles show labeling single line network even make restricting assumptions labeling style used model restricted variant model introduce efficient algorithm optimally labels single line respect given cost function based algorithm present general sophisticated workflow multiple metro lines experimentally evaluated metro maps introduction label placement geographic network visualization classical problems cartography independently received attention computer scientists label placement usually deals annotating point line area features interest map text labels associations features labels clear map kept legible geographic network visualization hand often aims geometrically distorted representation reality allows information connectivity travel times required navigation actions retrieved easily computing good network visualization thus related finding layout graph certain favorable properties example avoid visual clutter metro maps octilinear graph layout often chosen orientation edge multiple alternatively one may choose curvilinear graph layout display metro lines curves computing graph layout metro map labeling stops considered two different problems solved succession also integrated solutions suggested nevertheless practice metro maps often drawn manually cartographers designers existing algorithms achieve results sufficient quality adequate time example wolff report method needed hours minutes compute labeled metro map sydney present article unlabeled map instance obtained results obtained without proof optimality similar optimality gaps hand wang chi present algorithm creates graph layout labeling within one second guarantee labels overlap metro lines integrated approach computing graph layout labeling stops allows consideration given quality criteria final visualization hand treating problems separately probably reduce computation time moreover consider labeling metro map interesting problem since situations layout network given part input must changed workflow example cartographer may want draw alter graph layout manually using automatic method place labels preliminary version paper appeared proc int conf computing combinatorics cocoon volume lect notes comput pages research initiated dagstuhl seminar drawing graphs maps curves april probably test multiple different labeling styles drawing hence labeling algorithm needed rather flexible dealing different labeling styles paper given layout metro map consisting several metro lines stops also called stations located stop given name placed close position first introduce versatile general model labeling metro maps see section like many labeling algorithms point sets algorithm uses discrete set candidate labels point often label represented rectangle wrapping text since also want use curved labels however represent label simple polygon approximates fat curve curve certain width reflecting text height prove even simple model labeling single metro line considering different labeling styles hence restrict set candidates satisfying certain properties allows solve problem one metro line time number stops see section algorithm optimizes labeling respect cost function 
based imhof classical criteria cartographic quality utilizing algorithm present efficient heuristic labeling metro map consisting multiple metro lines see section method similar heuristic presented kakoulis tollis sense discards label candidates establish set preconditions allow efficient exact solution model quality general one kakoulis tollis however takes quality individual labels also quality pairs labels consecutive metro stations account finally evaluate approach presenting experiments conducted realistic metro maps see section note stops metro lines refer generally points interest lines kind network map address labeling styles octilinear graph layouts curvilinear graph layouts use curves general model behind method however subsumes limited particular styles labeling model assume metro lines given directed curves plane described polylines example derived approximating curves denote set metro lines stops metro line given ordered set points going beginning end two stops write lies denote union stops among metro lines call pair metro map stop given name placed close contrast previous work follow traditional map labeling abstracting given text bounding boxes instead model label stop simple polygon example label could derived approximating fat curve prescribing name stop ssee fig stop given set labels also call candidates set denoted since names disturb map content little possible strictly forbid overlaps labels lines well overlaps stop must labeled hence set called labeling two labels intersect label intersects metro line stop exactly one label definition metromaplabeling given metro map candidates cost function find exists optimal labeling labeling model allows create arbitrarily shaped label candidates metro map evaluation considered two different labeling styles first style octilinstyle creates stop set octilinear rectangles label candidates see fig use style octilinear maps second style curvedstyle creates stop set fat curves label candidates approximated simple polygons see fig use style curvilinear metro maps fig construction curved candidates construction single label candidates stop order adapt curvilinear style metro map basic idea label perpendicularly emanates stop respect metro line becomes horizontal sustain legibility following section motivate choice candidates based cartographic criteria give detailed technical descriptions labeling styles two examples labeling styles extracted rules generating label candidates imhof general principles requirements map labeling schematic network maps need legibility implies must destroy underlying design principle clutter end generate candidate labels adhere schematics network use straight horizontal diagonal labels octilinear layouts curved labels curvilinear layouts describe precisely defined labeling styles curvedstyle octilinstyle used curvilinear layouts octilinear layouts respectively curvilinear metro maps curvedstyle assume given metro map curvilinear order achieve clear graphic association label corresponding point construct simple polygon prescribing candidate label based curve possibly segment emanates candidate label continuous section directly start certain configurable distance define end candidate label based text length assign width curve section represent text height case lies single curved line require perpendicular enhance angular resolution final drawing bending towards horizontal direction avoid steep labels approximate simple polygon consisting constant number line segments describe construction single candidate 
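As a preview of the construction detailed next, the following sketch computes the control polygon of one curved candidate: the label curve leaves the stop perpendicularly to the metro line and bends toward the horizontal, and the four points are then fattened to a simple polygon of the text height. The class and parameter names, and the exact bending rule, are assumptions of this sketch, not the authors' implementation.

```java
// Hedged sketch: control polygon of a single curved label candidate.
final class CurvedCandidate {
    static double[][] controlPolygon(double px, double py,   // stop position
                                     double nx, double ny,   // unit normal of the line at the stop
                                     double len,             // label length derived from the text
                                     double bend) {          // distance over which the curve turns horizontal
        double hx = (nx >= 0) ? 1.0 : -1.0;                  // end pointing right or left
        double[] c0 = {px, py};                              // start at the stop
        double[] c1 = {px + bend * nx, py + bend * ny};      // leave perpendicularly to the line
        double[] c3 = {px + len * hx, py + bend * ny};       // horizontal end point
        double[] c2 = {c3[0] - bend * hx, c3[1]};            // approach the end horizontally
        return new double[][] {c0, c1, c2, c3};              // control polygon of a fat cubic Bezier curve
    }
}
```

With this picture in mind, the construction proceeds as follows: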
specifically stop metro line create constant number curved labels adapting curvilinear style metro map basic idea label perpendicularly emanates respect becomes horizontal sustain legibility see fig let normalized normal vector let constants define fat cubic curve following four control points see fig sgn sgn sgn otherwise define thickness predefined height label let starts length name let curve mirroring let length longest name stop let orientation less equal set otherwise hence almost vertical fig construction octilinear candidates stop lies horizontal segment lies diagonal segment lies vertical segment lies crossing two diagonal segments lies crossing vertical horizontal segment lies crossing vertical diagonal segment therefore almost horizontal also add labels pointing opposite experiments let labels start certain offset order avoid intersections octilinear metro maps octilinstyle assume metro map octilinear model labels horizontal diagonal rectangles let line segment lies let rectangle bounding box name let circle around radius place labels touch border intersect interior hence labels offset horizontal place five copies follows see fig place corner midpoint bottom edge corner coincides topmost point rotate counterclockwise place midpoint left side touches midpoint lies diagonal finally obtained mirroring vertical line mirroring horizontal line obtain rectangles respectively set diagonal create candidates manner case horizontal see fig however create candidates candidates horizontally aligned vertical place three copies right follows see fig rotate counterclockwise clockwise place midpoints left edges touch mirroring vertical line defines rectangles set case crossing two metro lines create candidates differently crossing two diagonals create candidates shown fig crossing horizontal vertical segment create labels shown fig crossing diagonal horizontal segment create labels shown fig analogously create labels crossing vertical diagonal segment remark stop lies multiple metro lines apply similar constructions labels placed angle bisectors crossing lines computational complexity first study computational complexity metromaplabeling assuming labels either based octilinstyle curvedstyle particular show problem metro map consists one line proof uses reduction problem monotone planar based given style create set clauses metro map labeling satisfiable proof easily adapted labeling styles note complexity labeling points using finite set axisaligned rectangular label candidates problem see however fig illustration proof formula clauses represented metro map truth assignment true true false false false gray graph represents adjacencies used gadgets solid lines represent spanning tree used merge polygons one simple polygon exemplarily polygons merged one polygon single components illustrated fig since necessarily use rectangles labels since considered labeling styles labels placed along metro lines obvious reduce labeling instance instance metromaplabeling order show prove decide whether metro map labeling based given labeling style theorem metromaplabeling labels based octilinstyle curvedstyle even metro map one metro line proof illustrations use octilinstyle constructions done based curvedstyle see end proof first show problem deciding whether labeling lies first create stop candidates based given labeling style recall candidate constant size guess stop label belongs desired labeling obviously decide polynomial time whether labeling performing basically intersection tests perform reduction planar 
monotone problem let boolean formula conjunctive normal form consists variables clauses furthermore clause contains three literals formula induces graph follows contains variable vertex contains clause vertex two vertices connected edge represents variable represents clause contained call clause positive negative contains positive negative literals formula instance planar monotone satisfies following requirements monotone clause either positive negative graph planar rectilinear plane embedding vertices representing variables placed horizontal line vertices representing negative clauses placed vertices representing positive clauses placed negative positive fig illustration gadgets selectable labels filled labels filled ports marked dashed square chain gadget length fork gadget clause gadget variable gadget three negative three positive ports edges drawn respective side planar monotone asks whether satisfiable using stops lying single horizontal vertical segments construct metro map mimics embedding particular consist one metro line connects stops stops candidates simulate variables clauses prove labeling satisfiable refer fig sketch construction first define gadgets simulating variables clauses connecting structures gadget consists set stops lie border simple polygon later use polygon prescribe shape metro line chain chain gadget represents transmits truth values variables clauses mimicking embeddings edges chain consists even number stops lie vertical horizontal segments see fig hence respect given labeling style stop predefined set candidates ksi stop two specially marked candidates lie opposite sides segment example see filled blue labels fig say labels selectable define gadget labels labels selected labeling end lay metro line intersect selectable label labels selectable stops placed following conditions satisfied label intersects label except intersections mentioned intersection selectable labels different stops segments stops connected polylines result simple polygon intersecting labels except selectable labels labels intersect selectable labels call ports chain later use ports connect gadgets chain arrange gadgets two ports intersect selectable label assign polarization selectable label labels negative labels positive consider labeling chain assuming interpreted metro line cut point order obtain open curve construction selectable labels contained particular observe negative port contained positive labels belong analogously positive port contained negative labels belong use behavior represent transmit truth values chain fork fork gadget splits incoming chain two outgoing chains transmits truth value represented incoming chain two outgoing chains fork consists three stops placed vertical segments placed horizontal segment see fig analogously chain stop two selectable labels arrange stops following conditions satisfied labels intersect apart two intersections selectable label intersects selectable label segments stops connected polylines result simple polygon intersecting labels except selectable labels label incoming port labels outgoing ports fork distinguish two types forks assigning different polarizations selectable labels negative positive fork labels positive negative labels negative positive hence incoming port positive negative outgoings ports negative positive consider labeling fork assuming interpreted metro line construction selectable labels belong incoming port belong outgoing ports belong finally one outgoing port belong incoming port belongs clause clause gadget 
represents clause given instance forms chain length addition three ports instead two ports see fig end one stops three selectable labels one intersecting selectable label stop two lying opposite side stop segment without intersecting selectable label stop gadget placed position vertex located drawing see fig observe labeling clause gadget always contains least one port assign polarization selectable labels variable variable gadget represents single variable forms composition chains forks connected ports see fig precisely let number clauses negative literal occurs let number clauses positive literal occurs along horizontal line vertex placed drawing place horizontal chain place sequence negative forks left sequence positive forks right negative incoming port connected positive port chain two consecutive forks connected chain connects positive outgoing port negative incoming port analogously positive incoming port connected negative port chain two consecutive forks connected chain connects negative outgoing port positive incoming port observe gadget free ports arrange forks free ports lie free ports lie consider labeling variable construction forks chains one positive free port contained negative free ports must contained analogously one negative free port contained positive free ports must contained using additional chains connect positive free ports positive clauses negative ports negative clauses correspondingly see fig precisely assume variable contained positive clause negative clauses handled analogously respect drawing positive free port gadget connected negative port chain whose positive port connected free port gadget note easily choose simple polygons enclosing gadgets intersect defining surround gadgets tightly fig illustration gadgets based octilinstyle selectable labels filled labels filled chain gadget length fork gadget clause gadget one metro line construct polygons enclosing single gadgets intersect sketch polygons merged single simple polygon cutting polygon point obtain polyline prescribing desired metro line construct graph follows polygons gadgets vertices graph edge contained corresponding gadgets polygons connected ports see fig since planar gadgets mimic embedding hard see also planar construct spanning tree edge also contained merge obtaining new simple polygon see example fig end cut polylines connect four end points two new polylines result simple polygon particular ensure new polygon intersect polygon intersects labels together correspondingly contract edge note contracting edges remains tree repeat procedure consists single vertex one simple polygon left soundness hard see construction polynomial size given formula assume satisfiable show construct labeling constructed metro map variable true false given truth assignment put negative positive labels corresponding variable gadget connected chains construction labels intersect remains select labels clause gadgets consider positive clause negative clauses handled analogously since satisfiable contains variable true given truth assignment set contains negative labels chain connecting gadget gadget positive labels chain hence add port gadget connected chain without creating intersections second stop clause put selectable label port apply procedure positive negative clauses without creating intersections yields labeling constructed metro map finally assume given labeling constructed metro map consider clause gadget positive clause negative clauses handled analogously construction contains least one port gadget port 
connected chain connected gadget variable set variable true apply procedure clauses negative clauses set corresponding variable false since contained negative labels chain contained positive labels hence positive ports variable gadget also contained previous reasoning implies negative ports gadget contained consequently applying similar procedure negative clauses happen set false altogether implies valid truth assignment remarks fig illustrates construction gadgets curvedstyle note fork gadget clause gadget chain gadget rely concrete labeling style using curvedstyle stop lying vertical segment exactly two different distinguish labels one lies left one lies right consecutive switchover switchover fig consecutive stops switchovers candidates satisfy transitivity property candidates satisfy transitivity property labeling algorithm single metro line study case given instance consists one metro line based cartographic criteria introduce three additional assumptions allows efficiently solve metromaplabeling stop assume candidate assigned one side either left candidate assigned left side right candidate assigned right side appropriately defined candidate sets assignments correspond geometric positions candidates left right candidates lie left right hand side assumption separated labels candidates assigned different sides intersect assumption normally real restriction appropriately defined candidate sets realistic metro lines line separates types candidates geometrically require call transitivity property assumption transitivity property three stops three candidates assigned side holds neither intersect intersect also intersect see also fig experiments established assumption assumption removing candidates greedily section show metro maps considered candidate sets remove labels indicates assumptions little influence labelings two stops consecutive stop see fig two consecutive stops say two candidates consecutive denote set contains pair consecutive labels two consecutive labels form switchover assigned opposite sides denotes ordered set indicating order stops two switchovers consecutive switchover define set switchovers set consecutive switchovers based cartographic criteria extracted imhof general principles requirements map labeling require cost function following form see also section detailed motivation assumption linear cost function require rates single label rates two consecutive labels rates two consecutive switchovers particular define penalizes following structures sustain readbility steep highly curved labels consecutive labels lie different sides shaped differently consecutive switchovers placed close dummy switchover dummy switchover fig illustrations labeling single metro line instance acyclic directed graph based labels instance labeling switchovers separate labeling instance satisfies assumption call metromaplabeling also softmetrolinelabeling introduce algorithm solves problem time max note typically constant assume contains candidates intersect labels one side first assume candidate labels assigned either left right side without loss generality left side two stops denote instance restricted stops denote first stop last stop transitivity property directly yields next lemma lemma let stops labeling intersecting also intersects proof recall proof assume satisfies assumptions assume sake contradiction candidate intersects see fig since labeling labels intersect hence neither intersect since three labels assigned side transitivity property holds directly contradicts intersect hence 
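This lemma is what makes a simple left-to-right dynamic program sound: only candidates of consecutive stops ever need to be tested for overlap. A minimal sketch of the resulting computation is given below, ahead of the formal graph construction; the generic label type and the cost hooks w1 (single label) and w2 (consecutive pair) are assumptions of the sketch, written in Java to match the stated implementation language.

```java
import java.util.*;
import java.util.function.*;

// Hedged sketch of the one-sided dynamic program: candidates of consecutive stops
// form the layers of a DAG, and an optimal labeling corresponds to a cheapest
// source-to-target path through it.
final class LineDP {
    static <L> double cheapestLabeling(List<List<L>> cand,              // cand.get(i): candidates of stop i
                                       BiPredicate<L, L> overlap,       // do two labels intersect?
                                       ToDoubleFunction<L> w1,          // cost of a single label
                                       ToDoubleBiFunction<L, L> w2) {   // cost of a consecutive pair
        double[] prev = new double[cand.get(0).size()];
        for (int j = 0; j < prev.length; j++) prev[j] = w1.applyAsDouble(cand.get(0).get(j));
        for (int i = 1; i < cand.size(); i++) {
            double[] cur = new double[cand.get(i).size()];
            Arrays.fill(cur, Double.POSITIVE_INFINITY);
            for (int j = 0; j < cur.length; j++) {
                L b = cand.get(i).get(j);
                for (int k = 0; k < prev.length; k++) {
                    L a = cand.get(i - 1).get(k);
                    // an edge exists only between non-overlapping consecutive candidates
                    if (prev[k] == Double.POSITIVE_INFINITY || overlap.test(a, b)) continue;
                    cur[j] = Math.min(cur[j], prev[k] + w1.applyAsDouble(b) + w2.applyAsDouble(a, b));
                }
            }
            prev = cur;
        }
        return Arrays.stream(prev).min().orElse(Double.POSITIVE_INFINITY); // cost of an optimal labeling
    }
}
```

For a constant number of candidates per stop this runs in time linear in the number of stops, consistent with the discussion above. The formal graph construction underlying the sketch follows.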
lemma states separates candidates stops succeeding use observation follows based define directed acyclic graph see fig graph contains vertex candidate two vertices call source target let denote candidate belongs vertex pair graph contains edge stop lies directly stop furthermore intersect vertex candidate graph contains edge vertex candidate graph contains edge edge define cost follows set set set path path starts ends costs path minimum costs among paths shortest path lemma path labeling labeling path proof recall proof assume satisfies assumptions let path let denotes vertices edges show labeling obviously stop set contains exactly one candidate construction edge labels intersect hence lemma label intersect label stop occurs stop hence set labeling let labels order stops holds let arbitrary labeling show path let two consecutive stops let corresponding labels since intersect corresponding vertices adjacent hence labels induce path let labels order stops using equation obtain lemma particular proves shortest path corresponds optimal labeling due chapter constructed time using dynamic programming approach call minpath particular minpath considers edge vertices vertex incoming edges implies edges since minpath considers edge compute edges demand saves storage theorem softmetrolinelabeling optimally solved time space labels sides candidates lie sides metro line solve problem utilizing algorithm case consider labeling let two switchovers lies switchover lies see fig roughly spoken induce instance lies instance lies switchovers lemma let stops consecutive let labeling switchover intersecting intersects proof recall proof assume satisfies assumptions assume sake contradiction label intersects without intersecting since intersect due assumption assigned side let assigned left hand side let left candidate right candidate analogous arguments hold opposite case since labeling labels intersect hence neither intersect since assigned side transitivity property must hold however contradicts intersect hence lemma yields instance choose labeling long labeling intersect label composes labeling instance one labeling instance use observation follows let two switchovers let stops let stops respectively see fig assume let instance restricted stops indicates stops belong instance stops switchovers compatible assigned side labeling contains furthermore switchovers labeling let optimal labeling among labelings denote labeling utilizing theorem obtain time labeling instance let label first stop let label last stop head tail technical reasons extend dummy stops stop introduce dummy switchover dummy switchover define compatible switchovers compatible labeling conceptually dummy switchover consists two labels assigned sides neither influence cost labeling hence assume contained labeling similar case define directed acyclic graph graph contains vertex switchover let denote switchover belongs vertex particular let denote vertex denote vertex pair graph contains edge compatible cost edge special case share stop set let path let edges vertex write instead denote set lemma graph path labeling let shortest path optimal labeling proof recall proof assume satisfies assumption construction directly follows path labeling first show labeling afterwards prove labeling holds let edges vertex write instead show induction labeling wei altogether implies labeling construction set labeling since dummy switchover hence holds consider set first argue labeling induction set labeling construction set labeling instance since compatible 
label intersects label lemma two labels intersect show wei induction wei since holds distinguish two cases first assume stop common let derive equation wem equality holds due definition wem equality induction true assume stop common let derive equation iii wem equality iii holds due definition wem equality induction true altogether obtain labeling finally show labeling assume sake contradiction labeling let switchovers observe two consecutive switchovers compatible hence edge consequently edges form path first two claims lemma labeling since shortest path holds show deriving contradiction end recall set optimal labeling instance labeling switchover contained let labelings restricted particular switchover must two consecutive switchovers however contradicts optimality consequently holds yielding claimed contradiction lemma shortest path corresponds optimal labeling exists using minpath construct time since contains switchovers graph contains vertices edges minpath considers edge compute edges demand needs storage compute costs incoming edges vertex utilizing case proceeding naively need time per edge yields time total reusing already computed information improve result follows let denote incoming edges stop first label lie stop first label let let graph instance considering candidates lie side stop second label stop let shortest path denote source target respectively observe excluding source target graph since shortest path also shortest path among paths end vertices assume without loss optimality excluding path sub path therefore need compute use order gain costs incoming edges hence basically apply vertex algorithm case using time per vertex compute costs time per edge yields next result theorem softmetrolinelabeling optimally solved time space step step step step fig schematic illustration presented workflow step generation candidates step scaling creation initial labeling red labels step candidates step solving metro lines independently cost function section motivate cost function introduced section given metro map consists single metro line generated candidates rate labeling using following cost function rates single label rates two consecutive labels rates two consecutive switchovers see assumption definition function relies following considerations based imhof general principles requirements map labeling respect steep highly curved labels difficult read others introduce cost candidate label imhof notes names assist directly revealing spatial situation exemplifies principle maps show text still conveying relevant geographic information transfer idea metro maps favor solutions labels two consecutive stops metro line similar properties two labels placed side line slopes curvatures similar map satisfying criterion user need find correspondence basis instead user identify metro lines sequences stops based label groups example makes easier count stops till destination course also improvement terms legibility model consider similarity consecutive labels introducing cost pair candidates belong consecutive stops penalize consecutive candidates lie opposite sides metro line disturb overall label placement add cost objective value solution candidates selected since minimize total costs solution cost pair candidates low similar labels lie side implied switchovers occur regular distances cluttered hence pair two consecutive switchovers solution add cost objective value solution depends distance smaller distance greater cost describe precisely defined evaluation definitions depend applied labeling 
style curved labels using curvedstyle labels define cost functions follows label let start point let end point recall derived labels curves let vector connecting let denote angle define hence angle measure horizontal vector whereby smaller value horizontal defined cost function rating single label hence penalize steep labels defined rating two consecutive labels follows point different else switchovers cost cases difference angle hence latter case penalize labels differently aligned finally rating two switchovers line defined number stops particular effects labeling equally sized sequences labels lying side metro line rated better labeling sequences sized irregularly octilinear labels using octilinstyle labels define cost functions follows recall use octilinstyle octilinear metro maps label let segment stop placed horizontal diagonal set vertical diagonal horizontal set cases set functions defined way curvedstyle multiple metro lines section consider problem given metro map consisting multiple metro lines present algorithm creates labeling two phases phase divided two steps see fig schematic illustration first phase algorithm creates set label candidates ensures exists least one labeling metro map second phase computes labeling end makes use labeling algorithm single metro line see section order rate extend cost function single metro line multiple metro lines require labeling restricted metro line rates single label plc rates two consecutive labels rates two consecutive switchovers particular satisfies assumption single metro line altogether workflow yields heuristic relies conjecture using optimal algorithms single steps sufficient obtain good labelings evaluation call approach dpalg first phase candidate generation first create label candidates enforce labeling given instance step candidate creation depending labeling style generate discrete set candidate labels every stop hence given instance particular assume candidate assigned one side metro line namely left right side furthermore cost function satisfying assumption metro line step scaling since stop metro map must labeled first apply transformation given candidates ensure least one labeling metro map end first determine stop metro line two candidates assigned opposite sides specifically among candidates assigned right hand side intersect metro line take candidate minimum costs minimal candidate exist label intersects least one metro line take label assigned right hand side manner choose candidate assigned left hand side let enforce labeling stop contains label check whether set admits labeling later describe specifically exists continue third step algorithm using input otherwise scale candidates smaller constant factor repeat described procedure sampling scaling range xmin xmax find manner scaling factor xmin xmax candidates admits labeling choose large possible could find chosen xmin xmax sampling appropriately abort algorithm stating algorithm could find labeling next describe check whether labeling since set contains two candidates make use formulation model labeling problem stop candidate introduce boolean variable boolean variables induce set true following formulas satisfiable labeling intersects intersect first formula ensures label solution intersects metro line second one avoids overlaps labels two last formulas enforce stop exactly one label contained solution according linear time respect number variables formulas satisfiability checked introduce variables instantiate second formula times pair candidates may overlap 
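The feasibility check just described can be phrased as an instance of 2-SAT once every stop retains exactly two candidates. The sketch below is a hedged illustration: it uses one boolean per stop ("take the right-hand candidate?") instead of the paper's one variable per candidate, and decides satisfiability with the standard linear-time strongly-connected-component test on the implication graph; class and method names are assumptions, not the authors' code.

```java
import java.util.*;

// Hedged 2-SAT sketch for the labeling feasibility check.
final class TwoSat {
    final int n; final List<List<Integer>> g, gr;        // implication graph and its reverse
    TwoSat(int vars) {
        n = 2 * vars; g = new ArrayList<>(); gr = new ArrayList<>();
        for (int i = 0; i < n; i++) { g.add(new ArrayList<>()); gr.add(new ArrayList<>()); }
    }
    static int lit(int var, boolean val) { return 2 * var + (val ? 0 : 1); }
    void addClause(int a, int b) {                       // (a or b): ~a -> b and ~b -> a
        g.get(a ^ 1).add(b); gr.get(b).add(a ^ 1);
        g.get(b ^ 1).add(a); gr.get(a).add(b ^ 1);
    }
    void forbid(int a) { addClause(a ^ 1, a ^ 1); }      // unit clause (~a): candidate a is excluded
    boolean satisfiable() {                              // Kosaraju: two depth-first passes
        boolean[] seen = new boolean[n]; Deque<Integer> order = new ArrayDeque<>();
        for (int v = 0; v < n; v++) dfs1(v, seen, order);
        int[] comp = new int[n]; Arrays.fill(comp, -1); int c = 0;
        while (!order.isEmpty()) { int v = order.pop(); if (comp[v] < 0) dfs2(v, c++, comp); }
        for (int v = 0; v < n; v += 2)                   // UNSAT iff some literal meets its negation
            if (comp[v] == comp[v + 1]) return false;
        return true;
    }
    private void dfs1(int v, boolean[] seen, Deque<Integer> order) {
        if (seen[v]) return; seen[v] = true;
        for (int w : g.get(v)) dfs1(w, seen, order);
        order.push(v);                                   // push on finish: pop order = decreasing finish time
    }
    private void dfs2(int v, int c, int[] comp) {
        if (comp[v] >= 0) return; comp[v] = c;
        for (int w : gr.get(v)) dfs2(w, c, comp);
    }
}
```

In this encoding, a conflict between a candidate of stop i (value ci) and one of stop j (value dj) is added as addClause(lit(i, ci) ^ 1, lit(j, dj) ^ 1), and a candidate crossing a metro line is removed with forbid. Continuing the running-time bookkeeping: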
remaining formulas instantiated time hence total running time denotes number scaling steps second phase candidate selection assume given scaled instance labeling previous phase apply candidates discarding candidates two stops candidates different metro lines intersect metro line satisfies assumption assumption never remove label candidate set however ensure always feasible solution finally considering metro lines independently select stop candidate label using dynamic program described section step candidate first ensure satisfies assumption assumption two candidates metro line intersect assigned opposite sides delete one labels follows delete delete otherwise none contained delete label higher costs ties broken arbitrarily afterwards assumption satisfied metro line iterate stops beginning end delete candidates violating assumption described follows let currently considered stop candidate check stop whether label intersects exists check whether candidate stop intersects case delete contained otherwise delete note contained construction metro line instance satisfies assumption finally ensure metro lines become independent sense candidates stops belonging different metro lines intersect candidate intersects metro line hence step metro lines independently labeled resulting labelings compose labeling first rank candidates follows metro line construct labeling using dynamic program two sided case presented section due previous step labelings exist note two metro lines may labels intersect candidate set val metro line val otherwise candidate smaller rank candidate val val val val ties broken arbitrarily greedily remove candidates metro lines independent create conflict graph vertices candidates edges model intersections candidates two vertices adjacent corresponding labels intersect delete vertices whose corresponding labels intersect metro line afterwards starting construct independent set follows first add vertices whose labels contained delete neighbors since labels intersect independent set original conflict graph increasing order ranks remove vertex neighbors time add obviously sustaining independent set update stop candidate set vertex contained since labels contained labeling based new candidate set step final candidate selection let instance applying third step labeling created first step previous step metro lines sense independent candidates stops belonging different metro lines intersect satisfy assumption assumption hence use dynamic programming approach section order label independently composition labelings labeling alternative approaches present three approaches ilpalg scalealg greedyalg adaptions workflow use experimentally evaluate approach alternatives greedyalg simple fast greedy algorithm ilpalg scalealg based ilp formulation integer linear programming formulation assess impact second phase approach present integer linear programming formulation optimally solves metromaplabeling respect required cost function let instance problem obtain first phase approach first note apply specific cost function see section cost function rates two consecutive switchovers labeling rely actual switchovers positions corresponding metro line hence may assume expects stops assumption helps reduce number variables candidate define binary variable interpret selected labeling introduce following constraints intersect metro line intersect moreover metro line define following variables end let set consecutive labels let stops particular order interpret selected labeling interpret selected labels stops 
form switchover interpret selected labels form two consecutive switchovers metro line introduce following constraints constraints denotes set labels lie left denotes set labels lie right ksi ksi define metro line following linear term subject presented constraints minimize consider variable assignment minimizes satisfies constraints show optimal labeling given instance respect given cost function first valid labeling constraint ensures label intersects metro line constraint labels pairwise disjoint finally constraint stop exactly one label contained particular metro line set valid labeling show metro line since minimize implies optimality obviously label cost taken account belongs constraint two consecutive labels contained minimality least one labels belong holds hence taken account belong constraint constraint holds labels consecutive stops form switchover hence constraint holds labels well form switchovers furthermore switchover switchovers consecutive hand minimality cases holds hence taken account consecutive switchovers altogether obtain following theorem theorem given optimal variable assignment presented ilp formulation set optimal labeling respect approach ilpalg simply replaces second phase dpalg ilp formulation hence solves second phase optimally approach scalealg samples predefined scaling range xmin xmax also used dpalg scale scales candidates correspondingly using ilp formulation checks whether candidates admit labeling hence approximately obtain greatest scaling factor admits labeling algorithm greedyalg replaces dynamic programming approach workflow follows starting solution enforced step greedy algorithm iterates stops stop selects candidate minimizes among valid candidates candidate selected previous stop candidate successive stop replaces candidate evaluation evaluate approach presented section case study metro systems sydney stops vienna stops used benchmarks sydney took curved layout fig octilinear layouts fig fig vienna took curved layout fig ocitlinear layouts fig see also table overview instances since metro lines sydney paths disassembled metro lines single paths hand tried extract long paths possible hence instances sydney decompose lines instances vienna lines took positions stops presented corresponding papers curved layout sydney removed stops tempe martin place fig marked red dots stops tightly enclosed metro lines placement small labels possible much problem approach given layout approach designer would need change layout curvilinear layouts used labels curvedstyle octilinear layouts used labels octilinstyle layouts authors present labelings see fig fig fig layout present labelings ratio table overview considered instances style style map applied labels smax applied scale factors scale smax lower bound largest possible scaling factor obtained scalealg scale computed first phase workflow instance style octi octi octi curved octi octi curved reference fig fig fig fig fig fig fig smax experiments performed single core intel core cpu processor machine clocked ghz ram implementation written java instance algorithm conducted runs took average running times time started runs performed runs without measuring running time order warm virtual machine java runtime environment build table running times seconds workflow broken two phases single steps applicable algo applied algorithm times less seconds marked algo instance layout phase creation phase selection total dpalg greedyalg octi scalealg ilpalg dpalg greedyalg octi scalealg ilpalg dpalg greedyalg octi scalealg 
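As an aside on ScaleAlg before the evaluation tables: the text says the scaling range [xmin, xmax] is sampled and each scale is checked for feasibility with the ILP. If feasibility is monotone in the scale factor, the same search can be realized by bisection; the sketch below assumes a hypothetical black-box `ilp_feasible(scale)` that scales all candidates and reports whether the ILP admits a labeling.

```python
def max_feasible_scale(ilp_feasible, x_min, x_max, tol=1e-3):
    """Approximate the greatest scaling factor that still admits a labeling.

    ilp_feasible: callable(scale) -> bool; scales all label candidates by
                  `scale` and reports whether the ILP finds a valid labeling.
    Assumes feasibility is monotone in the scale; uses bisection, i.e.
    O(log((x_max - x_min) / tol)) ILP solves.
    """
    if not ilp_feasible(x_min):
        raise ValueError("no labeling exists even at the smallest scale")
    lo, hi = x_min, x_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if ilp_feasible(mid):
            lo = mid   # a labeling exists; try a larger scale
        else:
            hi = mid   # infeasible; shrink the scale
    return lo
```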
ilpalg dpalg greedyalg curved scalealg ilpalg dpalg greedyalg octi scalealg ilpalg dpalg greedyalg octi scalealg ilpalg dpalg greedyalg curved scalealg ilpalg table experimental results sydney vienna algo applied algorithm candidates values concerning candidates candidates first third step labels removed establish assumption assumption switchovers cost ratio costs labeling obtained procedure dpalg greedyalg ilpalg scalealg labeling obtained procedure ilpalg sequence values concerning sequences labels lying side metro line shortest sequence max longest sequence length sequences instance layout algo candidates cost sequence min max avg dpalg greedyalg octi scalealg ilpalg dpalg greedyalg octi scalealg ilpalg dpalg greedyalg octi scalealg ilpalg dpalg greedyalg curved scalealg ilpalg dpalg greedyalg octi scalealg ilpalg dpalg greedyalg octi scalealg ilpalg dpalg greedyalg curved scalealg ilpalg dpalg curvedstyle layout fig greedyalg curvedstyle layout fig fig labelings sydney using dpalg greedyalg oracle order measure actual running times algorithms measure time virtual machine java spends loading classes optimizing byte code table table present quantitative results considered instances labelings found fig labelings created dpalg found fig fig fig respectively labelings instances found appendix first note respect total number created candidates labels removed enforcing assumption assumption see table indicates requiring assumptions real restriction realistic set candidates even though seem artificial running time even large networks sydney algorithm dpalg needs less seconds see table shows approach applicable scenarios map designer wants adapt layout labeling interactively particular scenarios every four steps must repeated time improves computing time example applying scaling step phase step time consuming step instance need rescaled relation label size map size determined dpalg moderately slower greedyalg seconds maximum see table hand approaches ilpalg scalealg alternatives running times much worse minute maximum see table quality observe labelings created dpalg switchovers namely see table column hence long sequences consecutive labels lie side metro line see corresponding figures table column sequence together ilp based approach ilpalg yields solution longest sequences average particular switchovers placed sequences regularly sized labels single sequence mostly directed particular similarly shaped sequences labels form regular patterns desired alignment labels chosen blend alignment adjacent labels comparison solution ilpalg dpalg costs dpalg never exceed factor see table column instances even obtains solution costs instances dpalg basically dpalg spends additional costs choice single labels distance switchovers dpalg berowra colah asquith richmond east richmond hornsby clarendon waitara windsor wahroonga mulgrave vineyard warrawee normanhurst turramurra riverstone thornleigh schofields quakers hill pymble gordon pennant hills killara marayong beecroft lindfield roseville cheltenham rith gto pla err kin ary itt ill blacktown chatswood seven hills epping toongabbie carlingford pendle hill artarmon leonards eastwood telopea wentworthville wollstonecraft denistone dundas westmead rydalmere parramatta waverton west ryde north sydney camellia meadowbank rosehill harris park cti ula clif hfi eld ill sta ore museum central erskineville ore green square peters mascot sto ile ury url sto ulw ill arr ille fto hto warwick farm james town hall redfern strathfield birrong regents park nfi eld ste ill 
cabramatta arr martin place lidcombe olympic park berala flemington concord west auburn north strathfield vill kin wynyard guildford yennora fairfield canley vale irc rhodes clyde milsons point granville merrylands domestic liverpool tempe international casula wolli creek arncliffe banksia ell ort xle ella ills gro kin arw erw nia ort ols ills glenfield macquarie fields ingleburn rockdale minto kogarah leumeah carlton allawah campbelltown hurstville penshurst macarthur mortdale oatley como jannali lla ira ari kir sutherland loftus engadine heathcote waterfall original labeling dpalg octilinstyle layout fig fig labelings sydney contrast greedyalg yields significantly switchovers maximum switchovers dpalg see consequently many distracting switches labels one side metro line see fig although sequences consecutive labels lying side may longer maximum compared dpalg much shorter average see table column sequence several adjacent labels point opposite results distracting effects see corresponding figures altogether labelings obtained greedyalg look regular cluttered dpalg solves problems since considers metro line globally yielding optimal labeling single line observation also reflected dpalg table column shows costs computed greedyalg significantly larger costs computed dpalg particular costs positioning switchovers much worse lgreedyalg ldpalg lgreedyalg ldpalg hence better quality dpalg prevails slightly better running time greedyalg concerning computed scale factor first phase dpalg labels smaller produced scalealg factor see table seems drawback first sight smaller size provides necessary space used obtain labeling higher quality respect number placement switchovers hence solutions scalealg switchovers except shorter sequences labels lying side average dpalg see table column sequence observe wolff labelings sydney look quite similar whereas labeling less switchovers see fig applies labelings layout vienna see fig recall approach needed hours compute labeled metro map sydney since need minutes compute layout without labeling lends first apply approach gain layout apply approach construct corresponding labeling wang chi present paper approach divided two phases first phase compute layout metro map second phase create labeling fig labelings sydney original labeling presented wang chi dpalg floridsdorf neue donau handelskai dresdner heiligenstadt volksoper michelbeuern akh str spittelau messe stadion schottenring alser nestroyplatz str tra tra museumsquartier gumpendorfer eld nte orf eit eig eid eit lin hieass zin run pilgramgasse schwedenplatz volkstheater westbahnhof zie gle rathaus nga pla schottentor burggasse stadthalle leopoldau aderklaaer rennbahnweg kagraner platz kagran alte donau vic donauinsel stadtpark karlsplatz taubstummengasse platz keplerplatz reumannplatz rochusgasse schlachthausgasse erdberg gasometer enkplatz simmering tscherttegasse alterlaa erlaaer siebenhirten fig labelings vienna original labeling presented wolff dpalg layout steps formulate energy functions expressing desired objectives locally optimized figure shows metro map sydney created approach comparison fig shows layout labeling created approach labelings look quite similar approach needed see table approach needed less machine however approach guarantee labels occlusionfree labels may overlap metro lines labels may result illegible drawings fig comparison two labelings line labeling created tool presented wang chi labeling created dpalg metro maps example fig shows two labelings metro line sydney laid tool wang 
chi figure shows labeling created tool fig shows labeling created approach labeling wang chi several serious defects makes map hardly readable marked regions show labels overlap hence labels obscured partly labels completely covered labels example region label peters label erskinville overlap label macdonaldtown hardly viewable region contains two diagonal rows stops aligned parallel upper row visible lower row almost completely covered labels labels upper row obscure labels lower row contrast approach yields labeling label stop easily legible therefore think approach reasonable alternative labeling step wang chi approach particular think better quality approach prevails better running time wang chi approach conclusion workflow reasonable alternative improvement approaches presented wolff wang chi former case approach significantly faster contrast latter case guarantee labelings acknowledgment sincerely thank herman haverkort arlind nocaj aidan slingsby wood helpful interesting discussions references agarwal van kreveld suri label placement maximum independent set rectangles comp aspvall plass tarjan algorithm testing truth certain quantified boolean formulas inf process christensen marks shieber empirical study algorithms label placement acm cormen leiserson rivest stein introduction algorithms edition mit press fink haverkort roberts schuhmann wolff drawing metro maps using curves didimo patrignani editors graph drawing volume lncs pages springer berlin heidelberg formann wagner packing problem applications lettering maps acm sympos comput pages fowler paterson tanimoto optimal packing covering plane npcomplete inf process imhof positioning names maps cartographer pages kakoulis tollis unified approach labeling graphical features proceedings fourteenth annual symposium computational geometry scg pages new york usa acm lichtenstein planar formulae uses siam wolff drawing labeling metro maps programming ieee vis comput stott rodgers walker automatic metro map layout using multicriteria optimization ieee vis comput van goethem meulemans reimer haverkort speckmann topologically safe curved schematisation cartogr wang chi metro maps ieee vis comput wolff graph drawing cartography tamassia editor handbook graph drawing visualization chapter pages crc press labelings dpalg greedyalg scalealg ilpalg fig labelings instance dpalg greedyalg scalealg ilpalg fig labelings instance dpalg greedyalg scalealg ilpalg fig labelings instance dpalg greedyalg scalealg ilpalg fig labelings instance dpalg greedyalg scalealg ilpalg fig labelings instance dpalg greedyalg scalealg ilpalg fig labelings instance dpalg greedyalg scalealg ilpalg fig labelings instance
Sailfish: isoform quantification from RNA-seq reads using lightweight algorithms

Rob Patro (Lane Center for Computational Biology, School of Computer Science, Carnegie Mellon University), Stephen M. Mount (Department of Cell Biology and Molecular Genetics, Center for Bioinformatics and Computational Biology, University of Maryland), Carl Kingsford (Lane Center for Computational Biology, School of Computer Science, Carnegie Mellon University)

RNA-seq has rapidly become the de facto technique to measure gene expression. However, the time required for analysis has not kept pace with data generation. We introduce Sailfish, a novel computational method for quantifying the abundance of previously annotated RNA isoforms from RNA-seq data. Sailfish entirely avoids mapping reads, a step required by all current methods. Sailfish provides quantification estimates much faster than existing approaches, typically an order of magnitude faster, without loss of accuracy.

The ability to generate genomic and transcriptomic data is accelerating beyond the ability to process it. The increasingly widespread use and growing clinical relevance of RNA-seq measurements of transcript abundance serve to magnify this divide between data-acquisition and data-analysis capabilities. The goal of isoform quantification is to determine the relative abundance of different RNA transcripts given a set of RNA-seq reads. In the analysis of RNA-seq data, isoform quantification is one of the most computationally demanding steps and is commonly the first step in the analysis of differential expression among multiple samples.

There are numerous computational challenges in estimating abundance from RNA-seq data. Mapping sequencing reads to the genome or to transcript sequences requires substantial computational resources and often leads to complicated models that account for read bias and error during inference, adding to the time spent in analysis. Furthermore, reads known as multireads map to multiple, sometimes many, different transcripts, and the ambiguity resulting from multireads complicates the estimation of relative transcript abundances. Existing approaches first use tools such as Bowtie to determine the potential locations from which the reads originated. Given these read alignments, accurate transcript quantification tools resolve the relative abundance of transcripts using iterative procedures: reads are first assigned to transcripts, the assignments are used to estimate transcript abundances, and the abundances are then used to re-weight the read assignments, with potential matches weighted in proportion to the currently estimated relative abundances; these steps are repeated until convergence. In practice, both steps are time consuming. For example, even when exploiting the parallel nature of the problem, mapping the reads of a reasonably sized RNA-seq experiment can take hours. Recent tools such as eXpress aim to reduce the computational burden of isoform quantification from RNA-seq data without substantially altering the algorithm. However, even in such advanced approaches, performing read alignment and processing the large number of alignments that result from ambiguously mapping reads remain significant bottlenecks that fundamentally limit the scalability of approaches that depend on mapping.

Sailfish is software for isoform quantification from RNA-seq data based on a philosophy of lightweight algorithms, which make frugal use of data, respect constant factors, and effectively use concurrent hardware by working with small units of data where possible. Sailfish avoids mapping reads entirely (Fig. 1), resulting in large savings in time and space. The key technical contribution behind our approach is the observation that transcript coverage, which is essential for isoform quantification, can be reliably and accurately estimated using counts of k-mers occurring in reads. This yields accurate quantification estimates an order of magnitude faster than existing approaches, often in minutes instead of hours; for example, on the data described in Figure 2, Sailfish was several times faster than the next fastest method while providing expression estimates of equal accuracy.

In Sailfish, the fundamental unit of transcript coverage is the k-mer. This differs from existing approaches, in which the fragment or read is the fundamental unit of coverage. By working with k-mers, we replace the computationally intensive step of read mapping with the much faster and simpler process of k-mer counting, and we avoid any dependence on read-mapping parameters, such as the allowed numbers of mismatches and gaps, which can have a significant effect on the runtime and accuracy of conventional approaches. Yet our approach is still able to handle sequencing errors, since k-mers in
reads overlap erroneous bases discarded rest read processed also leads sailfish single explicit parameter length longer may result less ambiguity makes resolving origin easier may affected errors reads conversely shorter though ambiguous may robust errors reads supplementary fig effectively exploit modern hardware multiple cores reasonably large memories common many data structures represented arrays atomic integers see methods allows software concurrent possible leading approach scales well number available cpus supplementary fig additional benefits sailfish approach discussed supplementary note sailfish works two phases indexing quantification fig sailfish index built particular set reference transcripts fasta sequence file specific choice length index consists data structures make counting set reads resolving potential origin set transcripts efficient see methods important data structure index minimal perfect hash function maps index per reference choice reference transcripts quant read data perfect hash function array counts transcript transcripts per set reads unhashable estimate abundances reallocate counts based figure sailfish pipeline consists indexing phase invoked via command sailfish index quantification phase invoked via command sailfish quant sailfish index four components perfect hash function mapping transcript set unique integer number unique set transcripts array recording number times occurs reference set index mapping transcript multiset contains index mapping set transcripts appears quantification phase consists counting indexed set reads applying procedure determine estimates relative transcript abundance reference transcripts index number different transcripts two share index allows quickly index count reads also appears transcripts find pairing minimum perfect hash function atomically updateable array counts allows count even faster existing advanced hashes used jellyfish index also contains pair tables allow fast access indexed appearing specific transcript well indexed transcripts particular appears amortized constant time index depends set reference transcripts choice length needs rebuilt one factors changes quantification phase sailfish takes input index described set rnaseq reads produces estimate relative abundance transcript reference measured reads per kilobase per million mapped reads rpkm transcripts per million tpm see methods definitions measures first sailfish counts number times indexed occurs set reads owing efficient indexing means perfect hash function use counting data structure methods process efficient scalable supplementary fig sailfish applies expectationmaximization procedure determine maximum likelihood estimates relative abundance transcript conceptually procedure similar algorithm used rsem except rather fragments probabilistically assigned transcripts variant used speed convergence estimation procedure first assigns proportionally transcripts transcript potential origin particular observations attributed transcript whereas appears different transcripts occurs times set reads observations attributed potential transcript origin initial allocations used estimate expected coverage transcript methods eqn turn expected coverage values alter assignment probabilities transcripts methods eqn using basic steps building block apply acceleration method squarem substantially increases convergence rate estimation procedure modifying parameter update step based current solution path estimated distance fixed point see methods alg additionally reduce 
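A minimal sketch of the allocation procedure just described, operating directly on k-mer equivalence classes (formalized in the Methods): each class count is split among its transcripts in proportion to the current abundances, and abundances are then re-estimated from length-normalized allocations. Names and data layout are illustrative, not Sailfish's internal C++ API.

```python
import numpy as np

def em_abundances(class_counts, class_members, lengths, n_iter=100):
    """Plain EM over k-mer equivalence classes (no SQUAREM acceleration).

    class_counts:  list of observed counts, one per equivalence class
    class_members: list (per class) of (transcript_id, multiplicity) pairs
    lengths:       (T,) numpy array of adjusted transcript lengths
    Returns relative abundances mu, summing to 1.
    """
    T = len(lengths)
    mu = np.full(T, 1.0 / T)
    for _ in range(n_iter):
        alloc = np.zeros(T)
        # E-step: split each class count among its transcripts in
        # proportion to current abundances (times k-mer multiplicity).
        for cnt, members in zip(class_counts, class_members):
            w = np.array([mu[t] * m for t, m in members])
            w_sum = w.sum()
            if w_sum == 0.0:
                continue
            for (t, _), wi in zip(members, w / w_sum * cnt):
                alloc[t] += wi
        # M-step: length-normalize allocations, renormalize to sum to 1.
        mu = alloc / lengths
        mu /= mu.sum()
    return mu
```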
number variables need fit procedure collapsing equivalence classes two equivalent perspective algorithm occur set transcript sequences rate details available methods reduction number active variables substantially reduces computational requirements procedure example set reference transcripts estimate abundance using microarray quality control maqc data fig appear least set reads however distinct equivalence classes counts thus procedure needs optimize allocations equivalence classes instead individual reduction factor procedure converges estimated abundances corrected systematic errors due sequence composition bias transcript length using regression approach similar zheng though using random forest regression instead generalized additive model correction applied initial estimates produced rather read mapping fragment assignment stage requiring fewer variables fit bias correction examine efficiency accuracy sailfish compared rsem express cufflinks using real synthetic data accuracy real data quantified agreement expression estimates computed piece software qpcr measurements sample human brain tissue hbr fig supplementary fig universal human reference tissue uhr supplementary fig paired qpcr experiments performed part microarray quality control maqc study qpcr abundance measurements given resolution genes rather isoforms thus compare measurements abundance estimates produced software summed estimates isoforms belonging gene obtain rpkm qpcr time minutes rpkm alignment quantification ground truth fish rsem xpress fflinks sail human brain tissue pearson spearman rmse medpe synthetic fish rsem xpress fflinks sail synthetic sailfish rsem express cufflinks sailfish rsem express cufflinks figure correlation qpcr estimates gene abundance estimates sailfish results taken microarray quality control study maqc results shown human brain tissue based estimates computed using reads sra accession set transcripts used experiment curated refseq genes accession prefix correlation actual number transcript copies simulated dataset abundance estimates sailfish transcripts used experiment ensembl transcripts annotated coding feature cds total time taken method sailfish rsem express cufflinks estimate isoform abundance dataset total time taken method height corresponding bar total broken time taken perform sailfish instead measured time taken count read set time taken quantify abundance given aligned reads counts tools run mode applicable allowed use threads table gives accuracy methods datasets measured pearson spearman correlation coefficients error rmse median percentage error medpe estimate gene compare predicted abundances using correlation coefficients error rmse median percentage error medpe additional details available supplementary note figure shows speed sailfish sacrifice accuracy show sailfish accurate isoform level generated synthetic data using flux simulator allows versatile modeling various protocols see supplementary note unlike synthetic test data used previous work procedure used flux simulator based specifically generative model underlying estimation procedure sailfish remains accurate isoform level fig memory usage sailfish comparable tools using ram isoform quantification experiments reported sailfish applies idea lightweight algorithms problem isoform quantification reads achieves breakthrough terms speed eliminating necessity read mapping expression estimation pipeline improve speed process also simplify considerably eliminating burden choosing single external parameter length user size 
number experiments grow expect sailfish paradigm remain efficient isoform quantification memory footprint bounded size complexity target transcripts phase grows explicitly number reads counting designed effectively exploit many cpu cores sailfish free software available http methods indexing first step sailfish pipeline building index set reference transcripts given length compute index containing four components first component minimum perfect hash function set kmers contained minimum perfect hash function bijection kmers set integers sailfish uses bdz minimum perfect hash function second component index array containing count every kmers finally index contains lookup table mapping transcript multiset contains reverse lookup table mapping set transcripts appears index product reference transcripts choice thus needs recomputed either changes quantification second step sailfish pipeline quantification relative transcript abundance requires sailfish index reference transcripts well set reads first count number occurrences kmers kmers since know exactly set need counted already perfect hash function set perform counting particularly efficient manner maintain array appropriate size contains number times thus far observed sequencing reads hence contain may originate transcripts either forward reverse direction account possibilities check forward read use heuristic determine increment final array counts number appearing forward direction read greater number increment counts appearing read forward direction otherwise counts appearing read incremented array counts ties broken favor forward directed reads taking advantage atomic integers cas operation provided modern processors allows many hardware threads efficiently update value memory location without need explicit locking stream update counts parallel sustaining little resource contention apply algorithm obtain estimates relative abundance transcript define equivalence class set appear set transcripts frequency words let vector entry gives many times appears transcript equivalence class given kmers performing procedure allocate counts transcripts according set equivalence classes rather full set transcripts let denote total count originate equivalence class say transcript contains equivalence class subset multiset denote estimating abundances via algorithm algorithm algo alternates estimating fraction counts observed originates transcript estep estimating relative abundances transcripts given allocation algorithm computes fraction equivalence class total count allocated transcript equivalence class transcript value computed currently estimated relative abundance transcript allocations used algorithm compute relative abundance transcript relative abundance transcript estimated variable denotes adjusted length transcript simply length transcript nucleotides algorithm iteration one iteration procedure updates estimated allocations computes new estimates relative transcript abundance based current estimates relative transcript abundance function begin return algorithm squarem iteration updates relative abundance estimates according accelerated procedure whose update direction magnitude dynamically computed modify backtracking necessary ensure global convergence max however rather perform standard update steps perform updates according vector relative squarem procedure described algo abundance estimates standard iteration expectationmaximization procedure outlined algo detailed explanation squarem procedure proof convergence see intuitively squarem 
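Concretely, one SQUAREM cycle around the basic EM map can be sketched as follows (after Varadhan and Roland, cited above); the full method also backtracks the steplength toward -1 when the extrapolated point is poor, which is omitted here for brevity. The intuition behind the scheme continues below.

```python
import numpy as np

def squarem_step(theta, em_step, eps=1e-12):
    """One SQUAREM acceleration cycle around a fixed-point map em_step.

    Two plain EM steps give the residuals r and v; the steplength
    alpha = -||r|| / ||v|| extrapolates along the solution path, and a
    final EM step stabilizes the result. With alpha = -1 this reduces
    exactly to two ordinary EM steps.
    """
    theta1 = em_step(theta)
    theta2 = em_step(theta1)
    r = theta1 - theta
    v = (theta2 - theta1) - r
    nv = np.linalg.norm(v)
    if nv < eps:                  # numerically converged already
        return theta2
    alpha = -np.linalg.norm(r) / nv
    alpha = min(alpha, -1.0)      # never shorter than a plain EM step
    theta_new = theta - 2.0 * alpha * r + alpha * alpha * v
    # Keep the iterate admissible (nonnegative, normalized), then stabilize.
    theta_new = np.clip(theta_new, 0.0, None)
    s = theta_new.sum()
    if s > 0.0:
        theta_new /= s
    return em_step(theta_new)
```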
procedure builds successive steps along solution path uses approximation jacobian magnitude differences solutions determine step size update estimates according update rule line procedure capable making parameters substantially improves speed converrelatively large updates gence sailfish iterative squarem procedure repeated number steps experiments reported paper see supplementary fig bias correction bias correction procedure implemented sailfish based model introduced zheng briefly performs regression analysis set potential bias factors response variables estimated transcript abundances rpkms sailfish automatically considers transcript length content dinucleotide frequencies potential bias factors specific set features suggested zheng transcript prediction regression model represents contribution bias factors transcript estimated abundance hence regression estimates may positive negative subtracted original estimates obtain rpkms details bias correction procedure see original method used generalized additive model regression sailfish implements approach using random forest regression leverage implementations technique key idea bias correction abundance estimation rather earlier pipeline bias correction sailfish disabled command line option finally note possible include potential features like normalized coverage plots encode positional bias bias correction phase however current version sailfish implemented tested bias correction features computing rpkm tpm sailfish outputs reads per kilobase per million mapped reads rpkm transcripts per million tpm quantities predicting relative abundance different isoforms rpkm estimate commonly used ideally times rate reads observed given position tpm estimate also become somewhat common given relative transcript abundances estimated procedure described tpm transcript given tpmi let number mapped transcript rpkm given rpkmi clii final equality approximate replace computing accuracy metrics since rpkm tpm measurements relative estimates isoform abundance essential put estimated relative abundances frame reference computing validation statistics centering procedure effect correlation estimates important perform computing rmse medpe let denote isoform abundances denote estimated abundances transform estimated abundances aligning centroid abundances specifically compute abundance estimates centroid adjusted abundance estimates compute statistics simulated data simulated data generated fluxsimulator parameters listed supplementary note resulted dataset pair reads rsem express cufflinks given alignments since make special use data dataset bowtie given additional flag aligning reads transcripts maximum observed insert size simulated data tophat provided option adjust expected simulated average read files provided directly sailfish without extra information since quantification procedure used whether single paired end reads provided software comparisons comparisons express rsem provided sets aligned reads bam format reads aligned bowtie using parameters allows three mismatches per read reports alignments prepare alignment cufflinks tophat run using bowtie options allow three mismatches per read rsem express cufflinks reported times sum times required alignment via bowtie rsem express via tophat cufflinks times required quantification time required method decomposed times alignment quantification steps fig choice software options effect runtime expression estimation software including rsem express cufflinks provides myriad program options user allow various 
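Returning briefly to the abundance measures defined in the Methods: the TPM and RPKM expressions are damaged in this copy. Reconstructed from the surrounding definitions and standard usage (cf. the cited Wagner et al.), with alpha_i the k-mer mass allocated to transcript i, l~_i its adjusted length, X_i the reads assigned to transcript i, and N the total number of assigned reads, they read

$$\mu_i=\frac{\alpha_i/\tilde{\ell}_i}{\sum_j \alpha_j/\tilde{\ell}_j},\qquad \mathrm{TPM}_i=10^{6}\,\mu_i,\qquad \mathrm{RPKM}_i=\frac{10^{9}\,X_i}{\tilde{\ell}_i\,N}.$$

The exact constants and symbols in the damaged lines could differ slightly, but these are the standard forms the surrounding text describes.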
desiderata example total time required tophat cufflinks lower cufflinks run without bias correction opposed bias correction data however without bias correction cufflinks yields slightly lower accuracy pearson spearman methods still taking times longer run sailfish similarly although aligned reads streamed directly express via bowtie empirically observed lower overall runtimes aligning reads quantifying expressions separately serial times reported also found synthetic data correlations produced express improved pearson spearman running procedure extra rounds however also greatly increased runtime estimation step respectively general attempted run piece software options would common standard usage scenario however despite inherent difficulty comparing set tools parameterized array potential options core thesis sailfish provide accurate expression estimates much faster existing tool remains true fastest performing alternatives even sacrificing accuracy speed order magnitude slower sailfish sailfish version used experiments analyses performed kmer size bias correction enabled experiments involving real simulated data rpkm values reported sailfish used transcript abundance estimates rsem version used default parameters apart provided alignment file bam experiments rpkm values reported rsem used abundance estimates express version used experiments run default parameters macq data without bias correction synthetic data abundance estimates taken fpkm values output express cufflinks version used experiments run bias correction recovery macq data recovery synthetic data fpkm values output cufflinks used transcript abundance estimates experiments run computer amd opterontm processors cores ram experiments wall time measured using bash time command implementation sailfish sailfish two basic subcommands index quant index command initially builds hash set reference transcripts using fish software hash used build minimum perfect hash count array tables described index command takes input size via option set reference transcripts fasta format via parameter produces sailfish index described optionally take advantage multiple threads target number threads provided via option quant subcommand estimates relative abundance transcripts given set reads quant command takes input sailfish index computed via index command described provided via parameter additionally requires set reads provided list fasta fastq files given parameter finally index command quant command take advantage multiple processors target number provided via option sailfish implemented takes advantage several language library features particular sailfish makes heavy use atomic data types parallelization across multiple threads sailfish accomplished via combination standard library thread facilities intel threading building blocks tbb library sailfish available program license developed tested linux macintosh author contributions designed method algorithms devised experiments wrote manuscript implemented sailfish software acknowledgments work partially funded national science foundation national institutes health received support alfred sloan research fellow references botelho pagh ziviani simple minimal perfect hash functions algorithms data structures pages springer flicek ahmed amode barrell beal brent clapham coates fairley ensembl nucleic acids research griebel zacher ribeca raineri lacroix sammeth modelling simulating generic experiments flux simulator nucleic acids research langmead trapnell pop salzberg ultrafast alignment short dna 
sequences human genome genome biology dewey rsem accurate transcript quantification data without reference genome bmc bioinformatics kingsford fast approach efficient parallel counting occurrences bioinformatics mortazavi williams mccue schaeffer wold mapping quantifying mammalian transcriptomes nature methods pheatt intel threading building blocks journal computing sciences colleges apr pruitt tatusova brown maglott ncbi reference sequences refseq current status new features genome annotation policy nucleic acids research roberts pachter streaming fragment assignment analysis sequencing experiments nature methods roychowdhury iyer robinson lonigro cao kalyanasundaram sam balbin quist barrette everett siddiqui kunju navone araujo troncoso logothetis innis smith lao kim roberts gruber pienta talpaz chinnaiyan personalized oncology integrative sequencing pilot study science translational medicine shi reid jones shippy warrington baker collins longueville kawasaki lee microarray quality control maqc project shows intraplatform reproducibility gene expression measurements nature biotechnology soneson delorenzi comparison methods differential expression analysis data bmc bioinformatics trapnell pachter salzberg tophat discovering splice junctions rnaseq bioinformatics trapnell williams pertea mortazavi kwan van baren salzberg wold pachter transcript assembly quantification reveals unannotated transcripts isoform switching cell differentiation nature biotechnology varadhan roland simple globally convergent methods accelerating convergence algorithm scandinavian journal statistics wagner kin lynch measurement mrna abundance using data rpkm measure inconsistent among samples theory biosciences zheng chung zhao bias detection correction data bmc bioinformatics sailfish isoform quantification reads using lightweight algorithms rob patro stephen carl kingsford supplementary figure effect length retained data ambiguity ratio count total mapped possible unique length supplementary figure length varied range processing synthetic dataset observe longer length results slight decrease data retention denoted red line shows ratio number read set hashable total number appearing set reads simultaneously observe ratio number unique unique locus origin set transcripts total number set transcripts blue line increases make larger seems expected choice larger resulting less robustness sequencing error higher fraction unique smaller providing robustness errors data cost increased ambiguity however since differences relatively small reasonably large range expect inference procedure fairly robust parameter use experiments default sailfish however attempt optimize parameter performing experiments supplementary figure speed counting indexed supplementary figure time count quantify transcript abundance read dataset function number concurrent hashing threads even single thread counts dataset processed minutes seconds processing threads counted minute seconds supplementary note additional benefits sailfish approach additional benefit lightweight approach size indexing counting structures required sailfish small fraction size indexing alignment files required methods example maqc dataset described figure total size indexing count files required sailfish quantification compared much larger indexes accompanying alignment files bam format used approaches index alignment file produced bowtie unlike alignment files grow number reads sailfish index files grow number unique complexity transcriptome composition independent number 
reads supplementary figure correlation plots qpcr human brain tissue synthetic data rsem cufflinks rpkm rpkm express qpcr qpcr ground truth ground truth qpcr ground truth supplementary figure correlation plots rsem express cufflinks data presented fig column labeled method whose output used produce column plots top row plots show correlation computed rpkm expression estimates human brain tissue bottom row plots shows correlation computed rpkm true abundance transcript synthetic dataset generate results shown express run using default streaming expression estimation algorithm reported methods additional batch iterations improve express accuracy come cost substantial increase runtime supplementary figure correlation qpcr universal human reference tissue rpkm sailfish rsem express cufflinks qpcr qpcr qpcr qpcr alignment quantification time minutes fish sail rse ress exp flink cuf sailfish rsem express cufflinks pearson spearman rmse medpe supplementary figure accuracy four methods second dataset macq study reads experiment taken sra accession reads mixture different tissues universal human reference uhr set reference transcripts used fig main text relative accuracy performance methods similar observed macq dataset sailfish express cufflinks achieving comparable accuracy slightly accurate rsem sailfish times faster cufflinks closest method terms speed supplementary note additional details accuracy analysis compare predicted abundances using correlation coefficients pearson spearman error rmse median percentage error medpe metrics allow gauge accuracy methods different perspectives example pearson correlation coefficient measures well trends true data captured methods correlation taken log scale discounts transcripts zero low abundance either sample rmse includes transcripts true estimated abundance zero express cufflinks produced outlier transcripts low estimated abundance significantly degraded pearson correlation measure discarded outliers filtering output methods setting zero estimated rpkm less equal cutoff chosen removed outliers seem discard truly expressed transcripts supplementary note parameters simulated data simulated data generated fluxsimulator following parameters expression nan nan fragmentation rna reverse transcription rtranscription yes yes amplification nan filtering sequencing yes fasta yes yes supplementary figure convergence relative abundance estimates iteration supplementary figure average difference relative abundance estimated two successive applications step algo lines versus iterations squarem algorithm universal human reference tissue experiment see residual drops quickly appears converged iterations squarem procedure performed
Multiscale model reduction for shale gas transport in fractured media

I. Y. Akkutlu (Department of Petroleum Engineering, Texas A&M University, College Station), Yalchin Efendiev (Department of Mathematics and Institute for Scientific Computation, Texas A&M University, College Station), Maria Vasilyeva (Department of Computational Technologies, Institute of Mathematics and Informatics, North-Eastern Federal University, Yakutsk, Republic of Sakha (Yakutia), Russia)

Abstract. In this paper, we develop a multiscale model reduction technique that describes shale gas transport in fractured media. Due to the pore-scale heterogeneities and processes, we use upscaled models to describe the matrix. Following our previous work, the derived upscaled model has the form of a generalized nonlinear diffusion model that, in particular, describes the effects of kerogen. To model the interaction between the matrix and the fractures, we use the Generalized Multiscale Finite Element Method (GMsFEM); in this approach, the matrix-fracture interaction is modeled via local multiscale basis functions. Previously, the developed GMsFEM was applied to linear flows with horizontal and vertical fracture orientations on a Cartesian fine grid. In this paper, we consider arbitrary fracture orientations, use a triangular fine grid, and develop GMsFEM for nonlinear flows. Moreover, we develop online basis function strategies to adaptively improve the convergence. The number of multiscale basis functions in each coarse region represents the degrees of freedom needed to achieve a certain error threshold. Our approach is adaptive in the sense that multiscale basis functions can be added in regions of interest. Numerical results for the model problem are presented to demonstrate the efficiency of the proposed approach.

Introduction. Shale gas transport is an active area of research due to the growing interest in producing natural gas from source rocks. Shale systems have added complexities due to the presence of organic matter, known as kerogen. Kerogen brings new fluid storage and transport qualities to the shale. A number of authors, among them Loucks, Sondergeld, and Ambrose, have previously discussed the physical properties of kerogen using scanning electron microscopy (SEM) and showed that kerogen is nanoporous, while the conventional inorganic rock materials are microporous. Gas transport in kerogen typically develops at low Reynolds number and relatively high Knudsen number values. Under these conditions, the transport is not expected to be driven dominantly by laminar Darcy flow; instead, pore diffusion and molecular transport mechanisms such as Knudsen diffusion and adsorbed-phase (surface) diffusion arise. The latter introduce nonlinear processes at the pore scale that occur in a heterogeneous pore geometry. These types of processes require upscaled models to represent the complex transport in reservoir simulations. In these simulations, the complex transport needs to be coupled to the transport in the fractures, which brings an additional difficulty to multiscale simulations. In particular, multiscale simulations of these processes describing the interaction between fractures and the matrix require model-reduction approaches that work for problems without scale separation and with high contrast. The objective of this paper is to discuss the development of such approaches for describing the fracture-matrix interaction, taking an upscaled matrix model following our previous work.

Our previous work proposed a set of macroscopic models that take into account the nanoporous nature of, and the nonlinear processes in, the shale matrix. The derivation uses multiple-scale asymptotic analysis applied to the mass balance equations together with an equation of state for the free gas and an isotherm for adsorption. The microscopic description is largely based on the model formulated by Akkutlu and Fathi. The macroscopic parameters that appear in the equations require solutions of a cell problem defined on representative volume elements (RVEs). These RVE problems take into account pore-scale variations and average their effects on the macro scale. The multiscale approaches proposed there are limited to representing features with scale separation and cannot represent a fracture network or the interaction between the fracture network and the matrix. Here, we present a multiscale approach following the framework of the Generalized Multiscale Finite Element Method (GMsFEM). The main idea of GMsFEM is to use multiscale basis functions to extract essential information on a coarse (computational) grid and develop a reduced-order
model developed gmsfem applied linear flows horizontal vertical fracture orientations cartesian fine grid paper contributions use arbitrary fracture orientations use triangular fine grids development gmsfem nonlinear flows development online basis function strategies adaptively improve convergence represent fractures fine grid use discrete fracture model dfm fine grid constructed resolve fractures coarse grid choose rectangular grid gmsfem framework uses models computing snapshot space offline space nonlinear models handled gmsfem locally updating multiscale basis functions study flows fractured media long history modeling techniques fine grid include discrete fracture model dfm embedded fracture model efm singlepermeability model models hierarchical fracture models though approaches designed simulations number approaches represent fractures macroscopic level example models represent network connected fractures macroscopically introducing several permeabilities block efm models interaction fractures blocks separately block main idea hierarchical fracture modeling presented homogenize fractures length smaller coarse block represent fractures approaches generalized incorporating interaction fractures permeability heterogeneities locally lead efficient upscaling techniques recent papers several multiscale approaches proposed representing fracture effects approaches share common concepts methods discuss sense add new degrees freedom represent fractures coarse grid main difference approaches use local spectral problems accompanied adaptivity detect regions add new basis functions regard procedure finding multiscale basis functions enrichment procedure different existing techniques proposed method constructs multiscale basis functions appropriately selecting local snapshot space local spectral problems underlying nonlinear problem local spectral problems allow adaptively enrich regions larger errors paper discuss adaptivity issues add multiscale basis functions selected regions reduce computational cost associated constructing snapshot space follow use randomized boundary conditions one novel components paper use online basis functions see online basis functions steady state problems nonlinear problems online basis functions constructed simulation using residual reduce error significantly basis functions used offline basis functions reduce error desired threshold present numerical results representative examples examples use nonlinear matrix fracture models numerical results show models fewer degrees freedom used get accurate approximation solution particular degrees freedom needed obtain accurate representation solution also add geomechanical contribution permeability term permeability depends pressure furthermore demonstrate use online basis functions reduce error paper organized follows next section present model problem section discuss model section devoted development gmsfem particular offline spaces section present numerical results offline basis functions section discuss randomized snapshot spaces show use give similar accuracy less computational cost section develop online basis functions present numerical results model problem paper study nonlinear gas transport fractured media motivated several applications including shale gas interested shale gas transport described similar equations arise models one considers free gas tight reservoirs consider general equations div amount free gas contain terms related storage adsorption coefficients authors consider nonlinear terms forms parameter 
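The display equations in this passage did not survive extraction. Based on the surrounding definitions and the authors' cited previous work, the generic fine-scale model is a nonlinear parabolic equation of the form

$$\frac{\partial}{\partial t}\,a(x,p)\;-\;\operatorname{div}\!\big(b(x,p)\,\nabla p\big)\;=\;q(x,t)\quad\text{in }\Omega,$$

where a(x,p) collects the storage terms (free plus adsorbed gas) and b(x,p) is the pressure-dependent transport coefficient; flow within the fractures is governed by an equation of the same form with fracture porosity and permeability in place of the matrix coefficients.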
unity kerogen equal vgrain inorganic material vgrain grain volume vgrain kerogen grain volume diffusivity porosity defined free fluid inorganic matrix kerogen follows kerogen kerogen inorganic matrix inorganic matrix free gas ideal gas assumption darcy law free gas flow inorganic matrix used permeability gas viscosity sorbed gas use langmuir henry isotherms authors discuss general framework equations also include nonlinear diffusivity due adsorbed gas shale formation nonlinear terms appear due barotropic effects nonlinear flows also contain components due diffusion fractures one needs additional equations modeling fractures fractures high conductivity use general equation form div describe flow within fractures authors use fracture porosity permeability problems solved fine grid using dfm described section many shale gas examples matrix heterogeneities upscaled resulting upscaled equation form however interaction matrix fractures require type multiscale modeling approach effects fractures need captured accurately approaches multicontinuum often used approaches use idealized assumptions fracture distributions paper use multiscale basis functions represent fracture effects previous work considered similar approaches flow fractures could horizontal vertical aligned cartesian grid paper consider arbitrary fracture distribution context nonlinear flow equations overall model equations solved coarse grid next introduce concepts fine coarse grids let usual conforming partition computational domain finite elements triangles quadrilaterals tetrahedra refer partition coarse grid assume coarse element partitioned connected union fine grid blocks fine grid partition denoted definition refinement coarse grid use denotes number coarse nodes denote vertices coarse mesh define neighborhood node see figure illustration neighborhoods elements subordinated coarse discretization figure illustration coarse neighborhood coarse element emphasize use denote coarse neighborhood denote coarse element throughout paper discretization discretize system fine grid use finite element method use dfm fractures solve problem using finite element method fem need fine grid discretization capture fractures computations expensive apply discrete fracture network dfm model modeling flows fractures model aperture fracture appears factor front one dimensional integral consistency integral form main idea model applied complex configuration fractured porous media demonstrate consider problem equation simplify fractures lines small aperture thus element needed describe fractures discretefracture model system equations discretized form matrix form fractures whole domain represented represent matrix fracture permeability field respectively aperture fracture index fractures note domain domain figure equations written follows test function figure fine grid fractures solve first linearize system use following linearization standard finite difference scheme used approximation time step size superscripts denote previous current time levels time unconditionally stable linearization pnf standard galerkin finite element method write solution standard linear element basis functions defined denotes number nodes fine grid equation presented matrix form mass matrix given mij stiffness matrix given aij hence time step following linear problem fine scale discretization yields large matrices size discretization using gmsfem offline spaces use multiscale basis functions represent solution space consider continuous galerkin formulation signify support 
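The linearized time stepping just described amounts to freezing the nonlinear coefficients at the previous time level and solving one sparse linear system per step. A minimal SciPy sketch, in which `assemble_M` and `assemble_A` are hypothetical assemblers returning the DFM mass and stiffness matrices and `F` is the load vector:

```python
import scipy.sparse.linalg as spla

def implicit_euler_step(p_n, tau, assemble_M, assemble_A, F):
    """One linearized backward-Euler step for d/dt a(p) - div(b(p) grad p) = f.

    The nonlinear coefficients are frozen at the previous level p_n (a
    Picard-type linearization, as in the text), giving the linear system
        (M(p_n)/tau + A(p_n)) p_{n+1} = (M(p_n)/tau) p_n + F.
    assemble_M / assemble_A are assumed to return scipy.sparse matrices.
    """
    M = assemble_M(p_n)          # mass matrix with coefficient from p_n
    A = assemble_A(p_n)          # stiffness matrix, fractures included via DFM
    lhs = (M / tau + A).tocsc()
    rhs = (M / tau) @ p_n + F
    return spla.spsolve(lhs, rhs)
```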
basis functions denote basis functions supported index represents numbering basis functions turn solution sought cms cik basis functions identified global coupling given variational form cms voff voff used denote space spanned basis functions let conforming finite element space respect partition assume solution satisfying next describe gmsfem gmsfem consists offline online stage offline stage construct multiscale basis functions online stage solve problem input parameters right hand sides boundary conditions offline computations step coarse grid generation step construction snapshot space used compute offline space step construction small dimensional offline space performing dimension reduction space local snapshots given computational domain coarse grid constructed local problems solved coarse neighborhoods obtain snapshot spaces smaller dimensional offline spaces obtained snapshot spaces dimension reduction via spectral problems solve problem constructed offline space moreover construct online basis functions problem dependent computed locally based local residuals present construction offline basis functions corresponding spectral problems obtaining space reduction offline computation first construct snapshot space vsnap snapshot space space basis functions solutions local problems various choices boundary conditions example use following extensions form snapshot space function defined denotes boundary node simplicity omit index given piecewise linear function defined generic coarse element define snap following variational problem snap snap snap snap brevity notation omit superscript yet assumed throughout section offline space computations localized respective coarse subdomains let number functions snapshot space region vsnap span coarse subdomain denote rsnap order construct offline space voff perform dimension reduction snapshot space using auxiliary spectral decomposition analysis motivates following eigenvalue problem space snapshots aoff aoff snap soff snap snap snap rsnap arsnap snap snap rsnap srsnap denote analogous fine scale matrices defined aij sij basis function generate offline space choose smallest moff eigenvalues form corresponding eigenvectors space snapshots setting pli snap moff coordinates vector next create appropriate solution space variational formulation continuous galerkin approximation begin initial coarse space span recall denotes number coarse neighborhoods standard multiscale partition unity functions defined continuous function linear edge multiply partition unity functions eigenfunctions offline space voff construct resulting basis functions moff moff denotes number offline eigenvectors chosen coarse node note construction yields continuous basis functions due multiplication offline eigenvectors initial continuous partition unity next define continuous galerkin spectral multiscale space voff span moff using single index notation may write voff span moff denotes total number basis functions space voff also construct operator matrix used denote nodal values basis function defined fine grid seek cms voff cms voff note variational form yields following linear algebraic system denotes nodal values discrete solution also note operator matrix may analogously used order project coarse scale solutions onto fine grid simulations presented next update basis functions discuss basis function update section numerical result present numerical results solution using offline basis functions basis functions offline space constructed following procedure described note 
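On each coarse neighborhood, the offline construction reduces to a small generalized eigenproblem projected onto the snapshot space, followed by multiplication with the partition-of-unity function. A minimal sketch (dense eigenproblem via SciPy; `R_snap`, `A`, `S`, and `chi` stand for the local snapshot matrix, the local fine-scale matrices, and the partition-of-unity nodal values):

```python
import numpy as np
from scipy.linalg import eigh

def offline_basis(R_snap, A, S, chi, m_off):
    """Build m_off offline basis functions on one coarse neighborhood.

    R_snap: (n_fine, n_snap) matrix whose columns are the local snapshots
    A, S:   local fine-scale stiffness and (weighted) mass matrices
    chi:    (n_fine,) nodal values of the partition-of-unity function
    Projects the spectral problem onto the snapshot space, keeps the
    eigenvectors of the m_off smallest eigenvalues (S_off is assumed
    symmetric positive definite), and multiplies by chi.
    """
    A_off = R_snap.T @ (A @ R_snap)
    S_off = R_snap.T @ (S @ R_snap)
    lam, V = eigh(A_off, S_off)        # eigenvalues in ascending order
    phi = R_snap @ V[:, :m_off]        # eigenvectors on the fine grid
    return chi[:, None] * phi          # partition-of-unity multiplication
```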
basis functions constructed initial time used generating stiffness matrix right hand side consider solution problem constant nonlinear coefficients constant coefficients see previous section representing matrix fracture properties use following nonlinear coefficients use fractures permeability use constant model see sorbed gas use langmuir model figure coarse fine grids coarse grid contains cells facets vertices fine grid contains cells facets vertices figure coarse fine grids coarse grid contains cells facets vertices fine grid contains cells facets vertices equation solved dirichlet boundary condition left boundary neumann boundary conditions boundaries domain length meters directions calculate concentration tmax years time step days initial condition use numerical solution construct structured two coarse grids nodes figure nodes figure fine grids use unstructured grids resolves existing fractures figure solution constant coefficients coarse top fine bottom grids year top bottom figure show pressure distribution three concrete time level years figure top bottom solutions year case nonlinear permeability figure top bottom solutions year case nonlinear coefficients pressure concentration following relationship pressure distribution nonlinear coefficients presented figures last time level figures show reference multiscale solutions solution obtained offline space dimension using mof multiscale basis functions per coarse neighborhood solution obtained space dimension compared solution left solution right figures observe gmsfem approximate solution accurately compare results use relative weighted errors using weighted norms defined figure multiscale basis functions corresponding first smallest eigenvalues case constant properties multiplication partition unity functions left right table present relative errors percentage last time level constant fracture matrix properties using coarse grids nodes approximation vary dimension spaces selecting certain number offline basis functions mof corresponding smallest eigenvalues table recall vof denotes offline space dim vof offline space dimension mof number multiscale basis functions per coarse neighborhood use similar number mof cms multiscale reference solutions respectively figure presents multiscale basis functions corresponding first smallest eigenvalues case constant properties offline basis functions multiplied partition unity functions use mof case coarse nodes relative weighted errors respectively final time level dimension corresponding offline space reference solution coarse grid nodes relative errors slightly smaller weighted errors respectively dimension corresponding offline space reference solution relative errors different time instants cases coarse grids presented figures observe take basis functions per coarse node relative errors remain small mof dim vof mof dim vof table numerical results relative errors final time level left case coarse nodes right case coarse nodes present relative weighted errors tables different number eigenvectors mof case nonlinear coefficients consider case coarse nodes use mof case relative errors respectively dimension corresponding offline space reference solution case relative errors respectively dimension coarse spaces corresponding number eigenvectors mof figure relative weighted errors coarse grid figure nodes constant properties figure relative weighted errors coarse grid figure nodes constant properties observe dimension coarse space number selected eigenvectors mof increases respective relative 
errors decrease also similar error behaviour case constant matrixfracture coefficients moreover see decrease relative error fast initially one obtain small errors using basis functions mof dim vof mof dim vof table numerical results relative weighted errors final time level case left case coarse nodes right case coarse nodes mof dim vof mof dim vof table numerical results relative weighted errors final time level case left case coarse nodes right case coarse nodes remark numerical simulations use empricial interpolation procedures approximating nonlinear functionals see details approaches empirical interpolation concepts used evaluate nonlinear functions dividing computation nonlinear function coarse regions evaluating contributions nonlinear functions coarse region taking advantage representation solution using approaches reduce computational cost associated evaluating nonlinear functions consequently making computational cost independent fine grid randomized oversampling gmsfem next present numerical results oversampling randomized snapshots substantially save computational cost snapshot calculations algorithm instead solving local harmonic problems fine grid node boundary solve small number harmonic extension local problems random boundary conditions precisely let rsnap independent identical distributed standard gaussian random vectors fine grid nodes boundary use randomized snapshots generate fraction snapshot vectors using random boundary conditions snapshot space calculations use extended coarse grid neighborhood width layer example means coarse grid neighborhood plus layer adjacent fine grid see figure illustration calculations oversampled neighborhood domain reduces effects due artificial oscillation random boundary conditions numerical results simulation results presented tables node coarse grid case use constant matrixfracture properties see present results randomized snapshot case last time level simulations set oversampling size use different numbers multiscale basis functions mof table investigate effects oversampling increase number fine grid extensions see oversampling helps improve results initially improvements slow larger oversampling domains give significant improvement solution accuracy use snapshot ratio standard number snapshots randomized algorithm relative weighted errors full snapshots randomized snapshots observe randomized algorithm give similar errors full snapshots figure neighborhood domain oversampling coarse grid nodes table shows relative errors different number randomized snapshots oversampled region chosen oversampled region contains extra cell layers around numerical results show one achieve similar accuracy using fraction snapshots randomized algorithms thus provide substantial cpu savings residual based adaptive online gmsfem section consider construction online basis functions used regions adaptively reduce error significantly follow earlier works done linear timeindependent problems online basis functions constructed based residual take account distant effects construction online basis functions motivated analysis using offline computation construct multiscale basis functions used input parameters solve problem coarse grid fast convergence due adding online basis functions depends offline space important offline space contains essential features solution space numerical simulations demonstrate sufficient number offline basis functions achieve rapid convergence proposed online procedure first derive error indicator error cnms problem energy norm 
furthermore use error indicator develop enrichment algorithm error indicator gives estimate local error coarse grid region add basis functions improve solution assume finite element space find solution solve multiscale solution voff cms voff define linear functional time level cms cms full snapshots randomized snapshots without oversampling oversampling oversampling oversampling mof table randomized oversampling gmsfem number snapshots constant matrixfracture properties every coarse mesh nodes relative errors final time level mof table randomized oversampling gmsfem different number snapshots every constant properties coarse mesh nodes relative errors final time level let coarse region cms cms cnms represent matrix fracture solution time level solution elliptic problem form cms cnms use following notation error estimators spatial discretization error take account dependence elliptic problem time step parameter use norm define projection vof span projection defined projection first terms spectral expansion terms eigenfunctions following problem note spectral problem different original one formulated however involves similar terms energy norms norms let cnms error time level using right hand side written follows cms cms cms cms cms cms cms cms rin therefore mini finally take inequality residuals give computable indicator error norm remark note analysis suggests use local eigenvalue problem eigenvalue problem slightly different used earlier numerical simulations show use improves convergence offline online procedures slightly numerical examples use spectral problems based numerical simulations independent time stepping next consider online basis construction use index represent enrichment level enrichment level use vms denote corresponding space contains offline online basis functions consider strategy getting space vms vms online basis functions mean basis functions computed iterative process contrary offline basis functions computed iterative process online basis functions computed based local residuals current multiscale solution cms let vms vms new approximate space constructed adding online basis coarse neighborhood cms vms corresponding gmsfem solution define csemi csemi cms satisfies semi cms error csemi csemi cms csemi cms cms csemi csemi cms csemi cms csemi cms csemi cms cms csemi csemi semi cms cms csemi semi cms cms csemi using obtain solution cms semi cms semi satisfies semi csemi taking cms vms semi semi semi semi cms last two terms inequality measure amount reduction error new basis function added space vms solution esemi cms semi cms semi cms enhance convergence efficiency online adaptive gmsfem consider enrichment nonoverlapping coarse neighbothoods let index set coarse neighborhoods define vms vms span obtain finally combine obtain semi psemi semi find online basis functions maximize local resudial rin current time level moreover required solution local residual defined using wms cms cms according riez representation theorem solution time level iteratively enrich offline space residual based online basis function basis functions calculated using equation zero dirichlet boundary conditions residual norm provides measure amount reduction energy error construction adaptive online basis functions first choose coarse neighborhood find online basis using equation compute norm local residuals calculate kri kri arrange descending order choose smallest implies coarse neighborhood add corresponding online basis original space vms numerical results next present numerical results 
residual based online basis functions consider similar problem previous section constant properties iteratively enrich offline space online residual basis functions selected time steps coarse fine grid setups section matrix needed add online basis function add selected time steps note add new online basis functions based current residuals remove previously calculated online basis function keep till update online basis functions save computational time small size coarse scale problem table present errors consider three different cases first case call case online basis functions added first time step every time step second case call case online basis functions replaced first five consecutive time steps online basis functions updated every time step third case call case online basis functions replaced first ten consecutive time steps online basis functions updated every time step updates initially helps reduce error due initial condition mentioned offline space important convergence present results different number initial offline basis functions per coarse neighborhood use multiscale basis functions offline space initial basis functions table show errors online basis functions replaced first five consecutive time steps case afterwards online basis functions updated time step calculations use tmax years days calculations performed coarse grid nodes case constant properties observe table following facts choosing initial offline basis functions improves convergence substantially indicates choice initial offline space important adding online basis functions less frequently every time step provides accurate approximation solution indicates online basis functions added selected time steps next would like show one use online basis functions adaptively use adaptivity criteria discussed table present results residual based online basis functions adaptivity figure show errors time observe applying adaptive algorithm much reduce errors figure fine scale solution right using offline basis functions middle two online iteration time levels left year top bottom constant coefficients solution size problem offline basis functions two online iteration conclusions paper present multiscale approach shale transport fractured media approach uses upscaled model form nonlinear parabolic equations represent matrix consists organic inorganic matter nonlinearities equation due interaction organic inorganic matter interaction nonlinear matrix fracture represented multiscale basis functions follow generalized multiscale finite element method extract leading order terms represent matrix fracture interaction multiscale basis functions constructed locally coarse region represent interaction upscaled matrix fracture network show proposed approach effectively capture effects overall system modeled using fewer degrees freedom numerical results presented cases regions offline procedure insufficient give accurate representations solution due fact offline dof iter mof mof mof dof iter mof mof mof dof iter mof mof mof table convergence history using one two four offline basis functions mof add online basis functions every time step first steps cases left middle right dof last time step dof iter mof mof mof dof iter mof mof mof dof iter mof mof mof table convergence history using one two four offline basis functions mof add online basis functions every time step first steps left middle right dof last time step computations typically performed locally global information missing offline information phenomena occur locally regions 
identified using proposed error indicators need develop online basis functions discuss online basis functions show procedure converges fast acknowledgements grateful tat leung helpful discussions suggestions regarding online basis constructions work partially supported russian science foundation grant dof iter iter mof mof mof dof iter mof mof mof table convergence history using one two four offline basis functions mof add online basis functions every time step first steps left without space adaptivity right space adaptivity dof last time step references yucel akkutlu yalchin efendiev viktoria savatorova asymptotic analysis gas transport shale matrix transport porous media yucel akkutlu ebrahim fathi multiscale gas transport shales local kerogen heterogeneities spe journal raymond ambrose robert hartman mery yucel akkutlu carl sondergeld shale calculations part new considerations spe journal baca arnett langford modeling fluid flow fractured porous rock masses finite element techniques int calo efendiev galvis randomized oversampling generalized multiscale finite element methods http victor calo yalchin efendiev juan galvis mehdi ghommem multiscale empirical interpolation solving nonlinear pdes journal computational physics chaturantabut sorensen application pod deim dimension reduction nonlinear miscible viscous fingering porous media mathematical computer modeling dynamical systems eric chung yalchin efendiev wing tat leung online generalized multiscale finite element methods arxiv preprint chung efendiev adaptive gmsfem flow problems journal computational physics peter dietrich rainer helmig martin sauter heinz georg teutsch flow transport fractured porous media springer science business media durlofsky numerical calculation equivalent grid bock permeability tensors heterogeneous porous media water resources research figure dynamic relative left right weighted errors coarse grid figure nodes case constant coefficients weighted errors using offline basis functions online basis functions without adaptivity top offline basis function middle offline basis functions bottom offline basis functions efendiev galvis gildin multiscale model reduction flows highly heterogeneous media journal computational physivs efendiev galvis hou generalized multiscale finite element methods journal computational physics efendiev galvis presho generalized multiscale finite element methods oversampling strategies international journal multiscale computational engineering accepted efendiev galvis multiscale finite element methods problems using local spectral basis functions journal computational physics efendiev hou ginting multiscale finite element methods nonlinear problems applications comm math yalchin efendiev seong lee guanglian jun yao zhang hierarchical multiscale modeling flows fractured media using generalized multiscale finite element method arxiv preprint appear international journal geomathematics doi anthony gangi variation whole fractured porous rock permeability confining pressure international journal rock mechanics mining sciences geomechanics abstracts volume pages elsevier gong durlofsky upscaling discrete fracture characterizations dualporosity models efficient simulation flow strong gravitational effects spe hajibeygi karvounis jenny loosely coupled hierarchical fracture model iterative multiscale finite volume method society petroleum engineers hadi hajibeygi dimitris karvounis patrick jenny loosely coupled hierarchical fracture model iterative multiscale finite volume method spe reservoir 
simulation symposium society petroleum engineers lee jensen lunati modeling simulation shale gas production formations ecmor european conference mathematics oil recovery lee lough jensen hierarchical modeling flow naturally fractured formations multiple length scales water resources research jianfang cong wang didier ding yuan generalized framework model simulation gas production unconventional gas reservoirs paper spe presented spe reservoir simulation symposium woodlands texas usa pages lee efficient simulation black oil naturally fractured reservoir discrete fracture networks homogenized media spe reservoir evaluation engineering robert loucks robert reed stephen ruppel daniel jarvie morphology genesis distribution pores siliceous mudstones mississippian barnett shale journal sedimentary research lough lee kamath new method calculate effective permeability gridblocks used simulation naturally fractured reservoirs spe firoozabadi numerical simulation water injection fractured media using model spe ree noorishad mehran upstream finite element method solution transient transport equation fractured porous media water resour volker reichenberger hartmut jakobs peter bastian rainer helmig finite volume method flow fractured porous media advances water resources carl sondergeld raymond joseph ambrose chandra shekhar rai jason moncrieff microstructural studies gas shales spe unconventional gas conference society petroleum engineers asana wasaki yucel akkutlu permeability shale spe annual technical conference exhibition society petroleum engineers yuan zhijiang kang perapon fakcharoenphol model simulating multiphase flow naturally fractured vuggy reservoirs journal petroleum science engineering qin ewing efendiev kang ren approach modeling multiphase flow naturally fractured vuggy petroleum reservoirs presented spe international oil gas conference exhibition china held beijing china jun yao hai sun fan wang sun numerical simulation gas transport mechanisms tight shale gas reservoirs petroleum science
5
nov counting problems graph products relatively hyperbolic groups ilya gekhtman samuel taylor giulio tiozzo abstract study properties generic elements groups isometries hyperbolic spaces general combinatorial conditions prove loxodromic elements generic full density respect counting balls word metric translation length grows linearly provide applications large class relatively hyperbolic groups graph products including artin groups rightangled coxeter groups introduction let finitely generated group one learn great deal geometric algebraic structure studying actions various negatively curved spaces indeed gromov theory hyperbolic groups provides clearest illustration philosophy however weaker forms negative curvature ranging relative hyperbolicity acylindrical hyperbolicity apply much larger classes groups still provide rather strong consequences theories special role played loxodromic elements action elements act dynamics paper interested quantifying abundance isometries action hyperbolic space emphasize simplest situations natural hyperbolic spaces arise locally compact includes actions associated relatively hyperbolic groups cubulated groups mapping class groups name hence paper make assumptions local finiteness discreteness action suppose action isometries hyperbolic space address question typical element act amenable word typical well defined meaning depends heavily averaging procedure family finitely supported measures exhausting although much known measures generated random walk little known counting respect balls word metric main focus precise terms fix finite generating set group let ball radius respect word metric determined call property generic language refinement questions asks loxodromic elements particular action generic respect generating set important date november gekhtman taylor tiozzo note genericity counting model depends generating set priori sets may generic respect one word metric respect another results paper modeled previous work studied situation hyperbolic recall loxodromic respect action translation length lim strictly positive prove isometric action hyperbolic group hyperbolic metric space loxodromic elements generic translation length grows linearly however genericity loxodromic elements general false hypothesis hyperbolic dropped see example present paper generalize theorem much larger class groups general setup discussed sample theorem suppose either finitely generated group admits geometrically finite action cat space virtually abelian parabolic subgroups admissible generating set artin coxeter group split direct product standard vertex generating set nonelementary isometric action separable hyperbolic metric space particular loxodromic elements generic fact theorem applies general class relatively hyperbolic groups graph products see section precise statements definitions fact group satisfying certain combinatorial conditions moving general framework state one result may independent interest direct generalization theorem maucourant consider case hyperbolic theorem let theorem generating set suppose infinite index subgroup lim proportion elements length less lie goes general framework results general framework follows define graph structure pair countable group directed finite graph labeled vertex called initial vertex every vertex exists directed path every edge labeled group element edges directed fixed vertex distinct labels exists evaluation map map extends set finite paths concatenating edge labels denote set finite paths starting set paths length 
cardinality counting loxodromics graph structure geodesic combing evaluation map bijective path evaluates geodesic associated cayley graph see section details set introduce counting measure graph structure almost semisimple number paths length starting pure exponential growth exists see section details definition vertex denote set loops based image call loop semigroup associated consider action hyperbolic metric space semigroup nonelementary contains two independent loxodromics graph structure nonelementary action vertex maximal growth see definition loop semigroup nonelementary introduce several criteria graph structure guarantee nonelementary call thickness quasitightness definition thickness graph structure thick vertex maximal growth exists finite set generally graph structure thick relatively subgroup every vertex maximal growth exists finite set given path say contains element contains subpath denote set paths starting initial vertex contain following definition modeled one found definition growth quasitightness graph structure called growth quasitight exists every set density zero respect generally given subgroup say growth quasitight relative exists constant every set density zero general form main theorem going prove following theorem let countable group isometries separable metric space let almost semisimple graph structure either nonelementary thick relative nonelementary subgroup growth quasitight relative nonelementary subgroup exists every one gekhtman taylor tiozzo displacement grows linearly translation length grows linearly iii consequence loxodromic elements generic loxodromic interested counting respect balls cayley graph get following immediate consequence corollary let group finite generating set suppose geodesic combing pure exponential growth respect iii combing satisfies least one conditions nonelementary action hyperbolic space set loxodromic elements generic respect note artin group example standard generators geodesic combing pure exponential growth loxodromic elements generic additional dynamical condition must added fact show graph products raags racgs condition amounts essentially group product moreover prove three conditions related namely applications hyperbolic groups cannon theorem hyperbolic group admits geodesic combing generating set fact language recognized graph defined choosing smallest word lexicographic order among words minimal length represent called shortlex representative proved graph structure nonelementary hence apply theorem raags racgs graph products let right angled artin coxeter group let standard vertex generating set result hermiller meier implies shortlex automatic language admits geodesic combing however graph parameterizing language geodesics correct dynamical properties needed apply theorem section modify construction show direct product graph structure respect standard generators strongest possible dynamical properties obtain following theorem let artin coxeter group virtually cyclic split product consider action hyperbolic separable metric space holds loxodromic elements generic respect standard generators counting loxodromics actually theorem applies graph products groups geodesic combing theorem refer reader section details let point raags products give fact examples actions loxodromics generic example nongenericity general denote free group rank fix free basis generating set let let denote cayley graph give standard generating set generating set consisting basis basis consider action factor acts left multiplication right factor 
acts trivially denote set loxodromics action lox lox lim note example pure exponential growth geodesic combing two conditions sufficient yield genericity loxodromics moreover complement lox subgroup infinite index positive density showing conditions needed also theorem moreover consequence geodesic combing produce order prove previous theorem also prove following fine counting statement number elements ball respect standard generating set far know result also new may independent interest theorem let artin group coxeter group virtually cyclic split product exists lim say group generating set previous property exact exponential growth stronger pure exponential growth one requires depends subtly choice generating set fact theorem establish result also generally graph products relatively hyperbolic groups results also apply large class relatively hyperbolic groups need two hypotheses first recall relatively hyperbolic group equipped compact metric space known bowditch boundary space carries natural measure defined respect word metric cay see section call relatively hyperbolic group generating set pleasant action ergodic respect measure second need geodesic combing respect generating set let call finite generating set admissible admits geodesic combing respect following general statement theorem let relatively hyperbolic group admissible generating set pleasant action hyperbolic separable metric space exists consequence elements generic gekhtman taylor tiozzo fact many relatively hyperbolic groups admit geodesic combings follows let call finitely generated group geodesically completable finite generating set extended finite generating set exists geodesic biautomatic structure ciobanu theorem proved whenever hyperbolic relative collection subgroups geodesically completable geodesically completable moreover automata theory theorem one gets admits geodesic biautomatic structure also admits geodesic combing hence one gets proposition let relatively hyperbolic group parabolic subgroup geodesically completable every finite generating set extended finite generating set admits geodesic combing let note particular virtually abelian groups geodesically completable proposition hence group hyperbolic relative collection virtually abelian subgroups geodesically completable admits geodesic combing moreover prove proposition proposition group acts geometrically finitely cat proper metric space pleasant respect finite generating set particular geometrically finite kleinian groups satisfy hypotheses theorem establishes theorem corollary theorem actions strongly contracting elements let remark combining work recent work yang one apply theorem even general cases following call element strongly contracting action cay sense quasigeodesic exists geodesics cay whose distance hgi least diameter image nearest point projection hgi bounded wenyuan yang recently announced whenever action cay strongly contracting element growth quasitight pure exponential growth respect combining theorem yang result obtain following corollary let group finite generating set suppose cayley graph cay strongly contracting element geodesic combing nonelementary action hyperbolic space set loxodromic elements generic respect genericity respect markov chain approach deduce typical properties elements typical long term behavior long paths associated graph structure also obtain general theorem generic elements sample paths markov chain may independent interest precisely almost semisimple graph defines markov chain vertices see hence defines markov 
measure set infinite paths initial vertex markov chains prove following theorem let almost semisimple nonelementary graph structure let every sample path sequence converges point counting loxodromics exist finitely many constants every sample path exists index denote one lim consequence loxodromic random walks illustration previous result given looking random walks let group generating set random walk process defined taking uniformly random among elements considering sample path prove following answers question kapovich theorem let nonelementary group isometries separable hyperbolic metric space let finite generating set consider random walk defined let corresponding measure set sample paths loxodromic proof let consider free group generated standard word metric composing projection action think group isometries standard geodesic combing whose graph one component hence proposition graph structure thick hence nonelementary result follows theorem previous results beginning gromov influential works large literature devoted studying typical behavior finitely generated groups recent developments found example one takes definition genericity respect random walks instead using counting balls genericity loxodromics established many cases particular question genericity mapping class group goes back least random walks proven independently rivin maher relates setup mapping class iff acts loxodromically curve complex genericity loxodromics random walks groups isometries hyperbolic spaces established increasing level generality let note general counting balls counting random walks need yield result fact important problem establish whether harmonic measure random walk coincide measure given taking limits counting measures balls many results area show two measures coincide except particular cases existence result random walk harmonic measure coincide due hyperbolic groups gekhtman taylor tiozzo counting balls wiest recently showed group satisfies weak automaticity condition action hyperbolic space satisfies strong geodesic word hypothesis loxodromics make definite proportion elements ball geodesic word hypothesis essentially requires geodesics group given normal forms project unparameterized quasigeodesics space orbit map work hand assume nice property action except isometries let note theorems answer hypotheses two papers overlap open problems section acknowledgments thank yago useful suggestions clarifications first author partially supported nsf grant erc advanced grant moduli ursula second author partially supported nsf grants third author partially supported nserc connaught fund background material since graph structures play central role work begin discussing details reader notices much inspired theory regular languages automatics groups place special focus graph parameterizes language thus terminology may differ literature graph structures general framework follows define graph structure pair countable group directed finite graph labeled vertex called initial vertex every vertex exists directed path every edge labeled group element edges directed fixed vertex distinct label thus exists evaluation map map extends set finite paths concatenating edge labels say graph structure respect denote set finite paths starting set finite paths call graph structure surjective case generates semigroup surjective graph structure geodesic path word length equal length path case paths evaluate naturally geodesic paths cayley graph cay finally graph structure called injective injective example path labels shortlex 
geodesic representative evaluation respect ordering injective bijective geodesic graph structure respect called geodesic combing respect note evaluation map restricted factors set words alphabet image words spelled starting called language parameterized recognized language prefix closed construction initial subword recognized word also recognized warn reader references differ exact meaning terms example use term combing refer language bijective geodesic graph structure rather graph structure since interested dynamical properties graph parameterizing language geodesics choose emphasize graph structure counting loxodromics almost semisimple graphs let summarize fundamental properties graphs markov chains much material appears refer article details proofs let finite directed graph vertex set adjacency matrix matrix mij defined mij number edges graph almost semisimple growth following hold initial vertex denote vertex directed path largest modulus eigenvalues eigenvalue modulus geometric multiplicity algebraic multiplicity coincide denote set finite paths set finite paths starting set finite paths starting path use denote terminal vertex similarly denote set infinite paths set infinite paths starting given two vertices directed graph say accessible write path two vertices mutually accessible mutual accessibility equivalence relation equivalence classes called irreducible components subset define growth lim sup set paths starting length vertex lies component let denote set finite paths based lie entirely moreover path let set finite paths written concatenation path contained entirely definition irreducible component called maximal equivalently growth equals vertex maximal belong vertex maximal growth moreover say vertex large growth exists path vertex maximal component small growth otherwise definition every vertex loop semigroup set loops graph begin end semigroup respect concatenation loop primitive concatenation two loops let almost semisimple graph growth exist constants lemma vertex large growth paths length vertex small growth paths length belongs maximal component paths length gekhtman taylor tiozzo also lemma paths length markov chains given almost semisimple graph growth edge set one constructs markov chain vertices follows large growth set probability going mij small growth set vertex measure induces measures pnv space finite paths starting simply setting pnv path starting similarly measure extended measure space infinite paths starting important cases measures set finite infinite respectively paths starting denote measure defines markov chain space consider random variable defined concatenation first edges infinite path order compare distribution markov chain counting measure let denote set paths ending vertex large growth note lemma exists turns see lemma respect choice measure vertex belongs maximal irreducible component recurrent path positive probability whenever path another vertex positive probability also path positive probability reason maximal components also called recurrent components almost every path markov chain exists one recurrent component path lies completely time visits vertex infinitely many times thus recurrent component let set infinite paths initial vertex enter remain inside forever denote conditional probability moreover recurrent vertex distribution return times decays exponentially min denotes first return time vertex associate recurrent vertex markov chain random walk use previous results random walks prove statements asymptotic behavior markov chain 
counting loxodromics sample path let define time path lies vertex formulas min simplify notation write instead sample path fixed define first return measure set primitive loops setting primitive loop edges extend entire loop semigroup setting primitive since almost every path starting visits infinitely many times measure probability measure equation every recurrent vertex first return measure finite exponential moment exists constant hyperbolic spaces paper always geodesic separable metric space space called every geodesic triangle side contained within two sides hyperbolic space gromov boundary refer reader section section definitions properties isometry translation length defined limit depend choice order estimate translation length use following lemma see example proposition lim lemma exists constant depends isometry space translation length given isometry loxodromic positive translation length case two fixed points say two loxodromic elements independent fixed point sets disjoint semigroup group isom nonelementary contains two independent loxodromics use following criterion proposition proposition let semigroup isometries hyperbolic metric space limit set boundary nonempty finite orbit nonelementary finally turn definition basic properties shadows space shadow around based gekhtman taylor tiozzo usual gromov product distance parameter definition number additive constant depending measures distance indeed geodesic travels geodesic distance following observation lemma metric space random walks probability measure said nonelementary respect action semigroup generated support nonelementary need fact random walk whose increments distributed according nonelementary measure almost surely converge boundary positive drift theorem theorems let countable group acts isometries separable hyperbolic space let nonelementary probability distribution fix let sample path random walk independent increments distribution almost every sample path converges point boundary resulting hitting measure nonatomic moreover finite first moment constant almost every sample path lim constant theorem called drift random walk behavior generic sample paths markov chain let group nonelementary action hyperbolic space section assume graph structure almost semisimple nonelementary convergence boundary section show almost every sample path markov chain converges boundary since assuming graph structure nonelementary exact proof theorem yields following theorem every path markov chain projection space converges point boundary consequence every vertex large growth harmonic measure namely hitting measure markov chain borel define lim previous proof also provides decomposition theorem harmonic measures set recurrent vertices counting loxodromics sum finite paths meet recurrent vertex terminal endpoint note recurrent harmonic measure random walk generated measure discussed lemma large growth measure proof since random walk measures measures hence equation measure also linear combination measures positive drift along geodesics section show almost every sample path positive drift theorem every sample path exists recurrent component lim depends since finite gives finitely many potential drifts markov chain proof let recurrent vertex since graph structure nonelementary loop semigroup nonelementary hence random walk given return times positive drift precisely theorem exists constant almost every sample path enters lim morever distribution return times finite exponential moment almost every sample path one lim two facts imply lim almost 
every infinite path visits every vertex recurrent component infinitely often thus recurrent vertex belongs component exists constant every path limit let maximal component vertices goal prove let pick path limit exists define equivalence relation since differ one generator uniformly bounded hence lim lim equivalence relation satisfies hypothesis lemma hence unique limit lim lim gekhtman taylor tiozzo corollary every vertex large growth every sample path exists recurrent component accessible proof theorem every path recurrent component let path lim passes drift equals path positive probability belongs moreover hence theorem lim lim required application section need following convergence measure statement let denote min recurrent smallest drift corollary large growth proof theorem sequence random variables wnnx converges almost surely function finitely many values moreover every variable bounded lipschitz constant orbit map thus converges yielding claim decay shadows shadow denote closure since harmonic measures markov chain nonatomic lemma get proof following decay shadows results proposition decay shadows proposition exists function large growth distance parameter shadow proposition proposition exists function vertex shadow distance parameter shadow generic elements respect counting measure use results generic paths markov chain obtain results generic paths respect counting measure counting loxodromics genericity positive drift first result drift positive along generic paths theorem let almost semisimple nonelementary graph structure smallest drift given every one result follows corollary similarly proof theorem proof let denote set paths know corollary one path length denote prefix length blog observe know proposition first term tends writing gbh log implies hence exists less log whenever sufficiently large proves inclusion lemma hence equation considering size finally using get lim sup proves claim lim sup decay shadows counting measure set usual shadow around centered basepoint need following decay property proposition function every start following lemma basic calculus gekhtman taylor tiozzo lemma let decreasing function exists function proof let define inductively follows max thus immediate definition prove note exists hence either thus limit point proof proposition pick path length let denote longest subpath starting initial vertex ends vertex large growth let write second part path note hlx lipschitz constant orbit map hence lemma note element choices continuation hence using proposition lemma replace choosing thus getting thus previous estimate becomes proves lemma one sets genericity loxodromics use previous counting results prove loxodromic elements generic respect counting measure strategy apply formula lemma show translation length grows linearly function length path order one needs show distance large theorem hand gromov product large trick split path two subpaths roughly length show first second half paths almost independent counting loxodromics define precisely let denote path define initial part subpath given first edges terminal part subpath given last edges definition moreover define random variables note definition markov property paths next lemma use notation refer recurrent component sample path eventually belongs theorem lemma lim proof note definition shift space infinite paths note every markov property let define function note corollary every vertex large growth every moreover every path lies entirely component point true shifted path almost surely hence side tends 
corollary proving claim show generically fellow travel argument let denote set paths start length lemma let function proof compute gekhtman taylor tiozzo fixing forgetting requirement fixing value hence decay shadows proposition follows shown almost independent still need show also almost independent order note beginning beginning use following trick hyperbolic geometry see lemma fellow traveling contagious let space basepoint let points order apply lemma need check first half first half generically fellow travel lemma probability proof consider set theorem every sample path know hence one hence proof theorem get lim finally writing gromov product triangle inequality fact action isometric get combined proves first half claim counting loxodromics second claim follows analogously namely theorem lemma implies conclude use use lemma fellow traveling contagious show gromov products grow fast respect counting measures proposition let function proof define min easy see lemma know using lemmas probability conditions hold tends hence finally put together previous estimates use lemma prove translation length grows linearly loxodromic elements generic theorem linear growth translation length let almost semisimple nonelementary graph structure smallest drift given consequence loxodromic gekhtman taylor tiozzo proof set proposition theorem events occur probability tends hence lemma approaches implies statement choose second statements follows immediately since elements positive translation length loxodromic genericity loxodromics markov chain remark similar proof yields loxodromics generic every sample path markov chain precisely following reformulation theorem theorem let almost semisimple nonelementary graph structure let smallest drift every one consequence loxodromic proof proof similar proof theorem sketch first using markov property establish lim choice function using positivity drift proof lemma prove lim lim previous three facts using lemma one proves lim theorem follows immediately fact corollary applying formula lemma thick graph structures definition graph structure thick every vertex maximal growth exists finite set loop semigroup greater generality subgroup say graph structure thick relatively vertex maximal growth exists finite set counting loxodromics case one component say component least one closed path positive length entirely contained proposition graph structure one component thick proof let unique maximal component every finite path graph written path initial vertex path entirely path going construction lengths uniformly bounded fix vertex let shortest path last vertex let shortest path last vertex one write sgt vary finite set sgt hence finite set thick implies nonelementary proposition fix action hyperbolic metric space let almost semisimple graph structure nonelementary subgroup thick relatively nonelementary maximal vertex action loop semigroup nonelementary proof since action nonelementary exists free subgroup rank embeds hence orbit map extends embedding identify image thickness implies taking limit sets see conclude infinite complete proof nonelementary suffices show fixed point suppose toward contradiction fixed point let write free generators consider sequence elements since finite may pass subsequence assume fixes point hence sequence elements fix point since free group implies agree powers infinitely many clear contradiction proposition theorem get theorem let nonelementary action countable group separable hyperbolic metric space suppose almost semisimple graph structure 
thick respect nonelementary subgroup loxodromic elements generic lim loxodromic gekhtman taylor tiozzo fact translation length generically grows linearly exists lim completes proof theorem introduction relative growth quasitightness fix graph structure practice often show graph structure thick establishing property growth quasitightness property introduced studied notion quasitightness depends particular graph structure given path say contains element contains subpath denote set paths starting initial vertex contain definition graph structure called growth quasitight exists every set density zero respect general given subgroup say growth quasitight relatively exists constant every set density zero growth quasitight implies thick proposition let almost semisimple graph structure subgroup growth quasitight relatively thick relatively proof let component maximal growth let vertex let path initial vertex denote length let growth quasitightness plus maximal growth path form contains entirely contained since length path contains awb let shortest path initial vertex shortest path terminal vertex vary finite set since completes proof combining proposition theorem get theorem let almost semisimple graph structure growth quasitight respect nonelementary subgroup loxodromic elements generic completes proof theorem counting loxodromics infinite index subgroups zero density section prove general setup subgroup infinite index zero density respect counting combined going prove sections immediately implies theorem introduction recall evaluation map paths starting theorem let injective almost semisimple thick graph structure let infinite index subgroup proportion paths starting spell element goes length path goes proof adaptation theorem case consider extension defined follows vertex set edge edge hgg lemma let component maximal growth infinitely many reached path contained proof suppose points reached manner set size consider thickness exists finite set path lying starting ending lie lifts path assumption implies thus hence thus finite subset neumann theorem must finite index giving contradiction following general result markov chains lemma lemma let markov chain countable set stationary measure let set points means positive probability path combining lemmas obtain corollary lying maximal component number paths length proof markov chain restricts markov chain turn lifts markov chain induced graph vertex set obtained assigning edge transition probability projection stationary measure given taking product stationary measure counting measure vertex positive measure lifts equal positive measure thus lemma implies corollary follows applying lemma chain gekhtman taylor tiozzo note paths length bijection paths length beginning ending evaluating elements thus obtain corollary lying maximal component number paths length beginning ending evaluating elements complete proof theorem given let resp set paths length spend time resp components note consider path decompose length adding contained maximal component since path spends time nonmaximal components possibilities depends graph hand corollary path possibilities thus hence fixing see lim sup true arbitrary get claimed application relatively hyperbolic groups section show main theorem applies large class relatively hyperbolic groups let finitely generated group collection subgroups following let recall hyperbolic relative compactum acts geometrically finitely maximal parabolic subgroups elements compactum unique homeomorphisms called bowditch boundary denote 
precisely let act homeomorphisms compact perfect metrizable space point called conical sequence distinct points point called bounded parabolic stabilizer infinite acts cocompactly say action convergence action acts properly discontinuously triples elements action geometrically finite convergence action every point either conical limit point bounded parabolic point note countably many parabolic points finally maximal parabolic subgroups stabilizers bounded parabolic points refer reader relevant background material fix relatively hyperbolic group generating set let denote distance respect let vertices cay induced metric denote cay corresponding electrified cayley graph cayley graph respect generating set remind reader cay hyperbolic naturally includes subspace complement collection parabolic fixed points counting loxodromics following bowditch boundary equipped quasiconformal nonatomic measure given construction taking average balls word metric definition define relatively hyperbolic group pleasant action measure ergodic also see proposition relatively hyperbolic group pleasant admits geometrically finitely action cat proper metric space instance geometrically finite kleinian groups satisfy hypothesis note admits action theorem works isometric actions hyperbolic metric space section prove following result theorem let pleasant relatively hyperbolic group let geodesic combing nonelementary action hyperbolic metric space graph structure nonelementary combining result theorem discussion section fact relatively hyperbolic groups pure exponential growth generating set theorem establishes theorem introduction fact using recent work yang theorem may extended nontrivial relatively hyperbolic groups relatively hyperbolic groups contain strongly contracting elements see corollary introduction however give argument fellow traveling cayley graph space need following proposition certainly known experts provide proof completeness proposition following holds suppose geodesic cay length least projects quasigeodesic cay let geodesic cay whose endpoints distance cay use following theorem osin theorem theorem sides geodesic triangle cay vertex exists vertex either proof proposition suppose theorem let geodesics cay joining endpoints respectively note assumption initial terminal endpoints geodesics less one another pick vertices ordered possible since consider geodesic quadrilateral opposite sides applying theorem twice may find vertices using example lemma find vertices respectively gekhtman taylor tiozzo depends note moreover since occur along similarly putting everything together setting see less completes proof measures sphere averages continuing notation previous section let exponent convergence cay lim log denotes ball radius respect definition define large shadow set exists geodesic cay converging intersecting similarly small shadow set every geodesic cay converging intersects theorem proposition yang constructs ergodic density without atoms word metric bowditch boundary lemma shows satisfies shadow lemma large enough uniform multiplicative constant particular full support follows denotes set elements lemma borel set one lim sup denote closure proof let borel set since number elements ball radius cay universally bounded point lies small shadows depends thus moreover denote indeed geodesic identity meets hence thus since large enough counting loxodromics exponential growth shadow lemma growth quasitightness relatively hyperbolic groups establish form relative growth quasitightness relatively hyperbolic 
group let element infinite path form cayley graph cay geodesic segment joining identity course may finitely many choices definition element called arc length parameterization cayley graph cay projects electrified graph cay following lemma see example adt lemma function every quasigeodesic cayley graph cay recall means endpoints hausdorff distance subpath endpoints span given say finite infinite geodesic contains exists let set exists geodesic identity contain every pass within distance proposition every ehn remark fixed suffices prove proposition sufficiently long sufficiently large prove proposition using ergodicity double boundary proposition apply proposition several times boundedness constant hence fix consider constant produced proposition function alone write let set pairs every geodesic cay joining exist infinitely many passes within let set pairs every geodesic joining elements passes within least definition moreover sets subsets furthermore lemma constant contains pair conical points every proof let function given lemma definition projects cay hence two distinct limit points bowditch boundary connecting points gekhtman taylor tiozzo geodesic cay using theorem one constructs geodesic cay connects travels hence lemma set nonempty interior precisely interior contains every pair conical points lemma proof suppose pair conical points pick geodesic joining definition segments points let particular uniform projections geodesics cay travel projection longer longer intervals hence passes since geodesic segment projects quasigeodesic may apply proposition find constants exist points setting see sufficiently large completes proof since pleasant action ergodic hence lemma implies lemma set full measure hence hypotheses set full measure hence ergodicity measure either proof set since nonempty interior measure full support must full measure second claim follows since let set conical points every geodesic ray identity converging infinitely many points passes within distance using proposition lemma lemma either implies corollary set full measure lemma closure contained proof false large would sequence converging since parabolic fixed point one view belonging boundary cay projections cay geodesics must travel projection cay longer longer intervals independent counting loxodromics proof lemma would obtain applying proposition two constants large geodesic contains hence obtain contradiction completes proof position prove proposition proof proposition lemma corollary hence applying lemma large enough lim sup loop semigroup nonelementary assume pleasant relatively hyperbolic group admits geodesic combing generating set recall work parabolic subgroups geodesically completable every generating set extended generating set geodesic combing use generating set constant let recall set paths directing graph initial vertex contain one write identifying paths identity group elements immediate definition hence proposition also zero density proposition let geodesic combing pleasant relatively hyperbolic group nonelementary proof going prove graph structure thick relative nonelementary free subgroup yields claim proposition let vertex maximal growth word let diam let constant proposition let group element representing path initial vertex consider set since maximal growth zero density set contains path belong path length entirely contained component containing contains subpath form awb length less let path start path end length swt finite set complete proof suffices show contains nonelementary subgroup proposition 
using standard argument construct free subgroup embeds cay embeds indeed random subgroup property let finite subset produced enlarged contain least one cyclically reduced hence hence required gekhtman taylor tiozzo double ergodicity conclude section proving group admits geometrically finite action cat space pleasant let assume acts geometrically finitely cat space recall orbit map induces embedding theorem identify bowditch boundary continue denote pushforward measure proposition suppose acts geometrically finitely cat space action ergodic respect remind reader quasiconformal respect word metric rather metric proof assume acts geometrically finitely cat space elements parabolic subgroups recall bowditch boundary identified gromov boundary let still denote word metric let lim sup busemann functions word metric yang lemma every conical lim sup lim inf moreover measure word metric gives full measure conical points quasiconformal density sense claim measure measure class indeed let lim sup define locally finite measure diag claim measure uniformly bounded derivative indeed compute lim sup lim sup lim sup lim sup could distribute limsup since limsup liminf within bounded difference see hence combining one gets cocycle uniformly bounded hence general fact ergodic theory cocycle also coboundary see proposition thus exists measure counting loxodromics measure class hence also measure class measure supported conical limit points thus also supported pairs conical limit points theorem radon measure double boundary cat space gives full measure pairs conical limit points ergodic thus ergodic raags racgs graph products let finite simplicial undirected graph recall corresponding artin group raag group given presentation corresponding coxeter group racg group obtained adding relators case called set standard vertex generators group greater generality let finite simplicial graph vertex let pick finitely generated group call vertex group define graph product group generated vertex groups relation two vertex groups commute corresponding vertices joined edge clearly raags special cases graph products racgs graph products graph products first introduced green received much attention see example section going apply counting techniques graph products geodesic combing graph products let call group admissible geodesic combing respect finite generating set language previous sections admissible generating set recall recurrent component nontrivial contains least one closed path component terminal path exiting graph structure recurrent every vertex admits directed path every vertex initial one recall given graph opposite graph graph vertex set assume anticonnected opposite graph connected implies direct product graph products associated subgraphs proposition let anticonnected choose vertex group geodesic combining generating set graph product generating set admits geodesic combing recurrent call generating set proposition standard generating set note agrees standard vertex generators special case artin coxeter groups proof proposition provide explicit construction recurrent graph structure standard generators provided construction geodesic combing however recurrent next lemmas show anticonnected modify construction order make gekhtman taylor tiozzo recurrent course necessary assume anticonnected counting theorems fail raags decompose direct products see example let first review construction first introduce total ordering vertices first two vertices ordering adjacent vertex labeled capital letter pair vertices adjacent 
one constructs tree following way word finite sequence adjacent least one vertex among given tree finite directed tree whose vertices labeled letters whose paths spell exactly words particular tree root one edge coming vertex endpoint follows directed edge always label terminal vertex moreover define header graph terminology graph one vertex letter edge finally construct graph structure corresponding racg follows consider union initial vertex header graph trees first one identifies vertex header graph root admissible tree possible one adds one edge vertex header graph adjacent one joins directed edge vertex labeled union trees vertex tree shown graph gives bijective geodesic graph structure section proposition fact show recognizes geodesic language normal forms respect ordering vertices need stronger fact let subgraph obtained removing vertices header graph initial vertex subgraph induced vertices trees excluding initial vertex tree labeled lemma anticonnected graph irreducible one directed path vertex vertex hence unique nontrivial recurrent component component terminal proof show indeed irreducible suffice since header graph directed loops directed edges increase ordering edges leaving construction since unique type vertex tree directed path vertices vertex tree suffices show vertex reach type vertex tree hence fix vertex vertices main point type vertex admissible tree joined type vertex tree hence suffices get type vertex admissible tree let type counting loxodromics fix path complement graph use anticonnected adjacent get type vertex inductively follows either first case type vertex admissible tree containing along directed path since consecutive pair fixed path adjacent condition holds automatically second case edge unique vertex tree call vertex either case get directed path type vertex repeat argument produce path type vertex continuing manner produce type since directed edge type vertex tree discussed completes proof know union admissible trees excluding initial vertices irreducible graph however header graph construction irreducible however following lemma observe words spell header graph also spelled one admissible trees hence modify essentially removing header graph order get recurrent graph recognizes language lemma anticonnected exists recurrent graph recognizes language proof assume anticonnected vertices ordered first two vertices commute adjacent modify resulting graph still recognizes language recurrent modification simple requires one observation note strictly increasing sequence spelled tree starting vertex fact since required condition whenever vertex adjacent however two letters greater adjacent construction similarly thus new graph given removing header graph joining initial vertex vertex tree word recognized made increasing word followed word spelled union admissible trees word spelled spelling increasing sequence tree second part proves claim graph recurrent graph lemma gives bijective geodesic graph structure coxeter group modify construction produce geodesic combing graph product let graph structure vertex group let initial vertex let labels edges going let targets edges respectively moreover let subgraph given removing initial vertex construct graph structure let consider disjoint union vertex serve initial vertex copy vertex type moreover edge type let connect vertex corresponding vertices corresponding edges labeled respectively finally edge vertex type let gekhtman taylor tiozzo connect new initial vertex vertices edges labeled respectively new graph gives 
bijective geodesic structure respect standard generators follows since construction parameterizes language geodesic normal forms given moreover since modeled recurrent graph one easily sees recurrent completes proof proposition corollary let graph product admissible groups decompose direct product exists thick graph structure standard generating set proof proposition graph structure given proposition thick since recurrent consequence thickness ready establish following counting result loxodromics theorem let infinite graph product admissible groups decompose product infinite groups let set standard vertex generators nonelementary action separable hyperbolic space set loxodromics action generic respect loxodromic exact exponential growth raags racgs conclude proving fine estimate number elements ball raags racgs theorem let artin coxeter group virtually cyclic decompose product infinite groups let consider standard generating set exists constants following limit exists lim say group satisfies exact exponential growth let remark property invariant respect metric hence depends carefully generating set fact theorem follow immediately following theorem general graph products theorem let graph product admissible groups assume anticonnected group split trivially product least vertices exact exponential growth note makes sense assume number vertices least fact group geodesic combing free product two admissible groups particular raag must free group generators exact exponential growth racg must virtually cyclic counting loxodromics let remark growth function graph products worked chiswell see also however seem obvious prove exact exponential growth method let consider recurrent graph defined previous section denote previous section know irreducible final step proof theorem following lemma lemma anticonnected least vertices graph aperiodic proof let assume consistently previous section vertices ordered let call three smallest vertices assume adjacent let observe sequences bac babc admissible hence tree subtree five vertices one labeled two labeled let denote two labeled let denote paths subtree since graph irreducible exists path let denote vertices definition type smaller adjacent thus construction also edge hence graph two loops one loop given since lengths two closed paths differ one greatest common divisor lengths paths hence aperiodic note statement false number vertices indeed one loop length hence period let consider general graph product previous section replacing vertices graphs recognize geodesic combings vertex group get new graph gives geodesic combing previous lemma get corollary anticonnected least three vertices graph irreducible aperiodic proof theorem since graph irreducible aperiodic perronfrobenius theorem adjacency matrix unique eigenvalue maximum modulus eigenvalue real positive simple moreover coordinates corresponding eigenvector positive finally sequence converges projection eigenspace particular none basis vectors orthogonal eigenvector hence exists cij cij lim path length initial vertex starts edge irreducible graph hence cij establishes exact exponential growth gekhtman taylor tiozzo references yago antolin laura ciobanu finite generating sets relatively hyperbolic groups applications geodesic languages trans amer math soc arzhantseva cashen tao growth tight actions pacific math adt tarik aougab matthew gentry durham samuel taylor pulling back stability applications relatively hyperbolic groups lond math soc goulnara arzhantseva lysenok growth tightness word hyperbolic 
groups mathematische zeitschrift arzhantseva olshanskii generality class groups subgroups lesser number generators free mat zametki athreya prasad growth groups monoids available arzhantseva generic properties finitely presented groups howson theorem communications algebra mladen bestvina mark feighn hyperbolicity complex free factors adv math martin bridson aandre haefliger metric spaces curvature vol springer baik jim howie pride identity problem graph products groups journal algebra alexandre borovik alexei myasnikov vladimir remeslennikov multiplicative measures free groups internat algebra comput brian bowditch tight geodesics curve complex invent math brian bowditch relatively hyperbolic groups international journal algebra computation danny calegari ergodic theory hyperbolic groups geometry topology contemp math james cannon combinatorial structure cocompact discrete hyperbolic groups geometriae dedicata danny calegari koji fujiwara combable functions quasimorphisms central limit theorem ergodic theory dynamical systems christophe champetier statistiques des groupes finie adv math ian chiswell growth series graph product bulletin london mathematical society chris connell roman muchnik harmonicity quasiconformal measures poisson boundaries hyperbolic spaces geom funct anal danny calegari joseph maher statistics compression scl ergodic theory dynamical systems cornelia shahar mozes mark sapir divergence lattices semisimple lie groups graphs groups transactions american mathematical society tushar das david simmons mariusz geometry dynamics gromov hyperbolic metric spaces emphasis settings arxiv preprint nathan dunfield william thurston finite covers random invent math david epstein mike paterson james cannon derek holt silvio levy william thurston word processing groups peters benson farb relatively hyperbolic groups geom funct anal furman perspective negatively curved manifolds groups rigidity dynamics geometry burger iozzi eds springer ghys harpe eds sur les groupes hyperboliques mikhael gromov progress mathematics vol boston boston papers swiss seminar hyperbolic groups held bern counting loxodromics daniel groves jason fox manning dehn filling relatively hyperbolic groups israel journal mathematics maucourant entropy drift word hyperbolic groups arxiv preprint elisabeth ruth green graph products groups thesis university leeds mikhael gromov hyperbolic groups springer gromov asymptotic invariants infinite groups geometric group theory vol vol cambridge university press mikhail gromov random walk random groups geometric functional analysis gekhtman taylor tiozzo counting loxodromics hyperbolic actions available haglund finite index subgroups graph products geometriae dedicata mark hagen weak hyperbolicity cube complexes groups journal topology hermiller meier algorithms geometry graph products groups journal algebra michael handel lee mosher free splitting complex free group hyperbolicity geometry topology derek holt sarah rees claas groups languages automata london mathematical society student texts vol cambridge university press cambridge hruska relative hyperbolicity relative quasiconvexity countable groups algebr geom topol tim hsu daniel wise linear residual properties graph michigan mathematical journal vadim kaimanovich ergodicity harmonic invariant measures geodesic flow hyperbolic spaces reine angew math ilya kapovich nadia benakli boundaries hyperbolic groups combinatorial geometric group theory new york kim thomas koberda geometry curve graph artin group international 
journal algebra computation ilya kapovich alexei myasnikov paul schupp vladimir shpilrain complexity decision problems group theory random walks algebra ilya kapovich igor rivin paul schupp vladimir shpilrain densities free groups visible points test elements math res lett ilya kapovich paul schupp genericity shanskii method isomorphism problem groups math ann joseph maher random walks mapping class group duke math holger meinert invariant graph products groups journal pure applied algebra john meier graph product hyperbolic groups hyperbolic geometriae dedicata howard masur yair minsky geometry complex curves hyperbolicity invent math mathieu sisto deviation inequalities random walks arxiv preprint joseph maher giulio tiozzo random walks weakly hyperbolic groups appear reine angew math bernhard neumann groups covered finitely many cosets publ math debrecen walter neumann michael shapiro automatic structures rational growth geometrically finite hyperbolic groups invent math shanskii almost every group hyperbolic internat algebra comput denis osin relatively hyperbolic groups intrinsic geometry algebraic properties algorithmic problems vol american mathematical gekhtman taylor tiozzo denis osin acylindrically hyperbolic groups trans amer math soc david radcliffe rigidity graph products groups algebraic geometric topology igor rivin walks groups counting reducible matrices polynomials surface free group automorphisms duke math alessandro sisto contracting elements random walks reine angew math samuel taylor giulio tiozzo random extensions free groups surface groups hyperbolic int math res bert wiest genericity loxodromic actions arxiv preprint wenyuan yang measures growth relatively hyperbolic groups available statistically actions groups contracting elements available department mathematics yale university hillhouse ave new address department mathematics temple university north broad street philadelphia address department mathematics university toronto george toronto canada address tiozzo
xploring pace lack box attacks eep eural etworks dec arjun nitin bhagoji department electrical engineering princeton university warren dawn song eecs department university california berkeley bstract existing attacks deep neural networks dnns far largely focused transferability adversarial instance generated locally trained model transfer attack learning models paper propose novel gradient estimation attacks adversaries query access target model class probabilities rely transferability also propose strategies decouple number queries required generate adversarial sample dimensionality input iterative variant attack achieves close adversarial success rates targeted untargeted attacks dnns carry extensive experiments thorough comparative evaluation attacks show proposed gradient estimation attacks outperform transferability based attacks tested mnist datasets achieving adversarial success rates similar well known attacks also apply gradient estimation attacks successfully content moderation classifier hosted clarifai furthermore evaluate attacks defenses show gradient estimation attacks effective even defenses ntroduction ubiquity machine learning provides adversaries opportunities incentives develop strategic approaches fool learning systems achieve malicious goals many attack strategies devised far generate adversarial examples fool learning systems setting adversaries assumed access learning model szegedy goodfellow carlini wagner however many realistic settings adversaries may access model knowledge details learning system parameters may query access model predictions input samples including class probabilities example find case popular commercial offerings ibm google clarifai access query outputs class probabilities training loss target model found without access entire model adversary access gradients required carry attacks existing attacks dnns focused transferability based attacks papernot papernot adversarial examples crafted local surrogate model used attack target model adversary direct access exploration attack strategies thus somewhat lacking far literature paper design powerful new attacks using limited query access learning systems achieve adversarial success rates close attacks attacks help understand extent threat posed deployed systems adversarial samples code reproduce results found https new attacks propose novel gradient estimation attacks dnns adversary assumed query access target model attacks need work done visiting berkeley figure sample adversarial images gradient estimation attacks clarifai content moderation model left original image classified drug confidence right adversarial sample classified safe confidence access representative dataset knowledge target model architecture gradient estimation attacks adversary adds perturbations proportional estimated gradient instead true gradient attacks goodfellow kurakin since direct gradient estimation attack requires number queries order dimension input explore strategies reducing number queries target model also experimented simultaneous perturbation stochastic approximation spsa particle swarm optimization pso alternative methods carry attacks found gradient estimation work best strategies propose two strategies random feature grouping principal component analysis pca based query reduction experiments gradient estimation attacks models mnist dimensions dimensions datasets find match attack performance achieving attack success rates attacks untargeted case iterative attacks targeted untargeted cases achieve performance 
queries per sample attacks around queries iterative attacks much fewer closest related attack chen achieve similar success rates attack running time attack longer adversarial sample see section advantage gradient estimation attack require adversary train local model could expensive complex process datasets addition fact training local model may require even queries based training data attacking systems demonstrate effectiveness gradient estimation attacks real world also carry practical attack using methods safe work nsfw classification content moderation models developed clarifai choose due socially relevant application models begun deployed moderation liu makes attacks especially pernicious carry attacks knowledge training set demonstrated successful attacks figure around queries per image taking around minute per image figure target model classifies adversarial image safe high confidence spite content moderated still clearly visible note due nature images experiment show one example others may offensive readers full set images found https comparative evaluation attacks carry thorough empirical comparison various attacks given table mnist datasets study attacks require zero queries learning model including addition perturbations either random proportional difference means original targeted classes well various transferability based attacks show proposed gradient estimation attacks outperform attacks terms attack success rate achieve results comparable attacks addition also evaluate effectiveness attacks dnns made robust using adversarial training goodfellow szegedy recent variants including ensemble adversarial training iterative adversarial training madry find although standard ensemble adversarial training confer robustness attacks vulnerable iterative gradient estimation attacks adversarial success rates excess targeted untargeted attacks find methods outperform attacks achieve performance comparable attacks iterative adversarial training quite robust attacks test summary contributions include propose new gradient estimation attacks using queries target model also investigate two methods make number queries needed independent input dimensionality conduct thorough evaluation different attack strategies classifiers mnist datasets find small number queries proposed gradient estimation attacks outperform transferability based attacks achieve attack success rates matching attacks carry practical attacks clarifai safe work nsfw classification content moderation models public apis show generated adversarial examples mislead models high confidence finally evaluate attacks adversarial training based defenses dnns find standard ensemble adversarial training robust gradient estimation attacks even iterative adversarial training methods outperform transferability based attacks related work existing attacks use local model first proposed convex inducing classifiers nelson malware data use genetic algorithms craft adversarial samples dang use hill climbing algorithms methods prohibitively expensive data images papernot proposed using queries target model train local surrogate model used generate adversarial samples attack relies transferability best knowledge previous literature attacks deep learning setting independent work narodytska kasiviswanathan chen narodytska kasiviswanathan propose greedy local search generate adversarial samples perturbing randomly chosen pixels using large impact output probabilities method uses queries per iteration greedy local search run around iterations image 
resulting total queries per image much higher attacks find methods achieve higher targeted untargeted attack success rates mnist compared method chen propose attack method named zoo also uses method finite differences estimate derivative function however propose attacks compute adversarial perturbation approximating fgsm iterative fgs zoo approximates adam optimizer trying perform coordinate descent loss function proposed carlini wagner neither works demonstrates effectiveness attacks systems defenses background valuation setup section first introduce notation use throughout paper describe evaluation setup metrics used remainder paper full set attacks evaluated given table appendix also provides taxonomy attacks otation classifier function mapping domain set classification outputs case binary classification set class labels number possible classification outputs set parameters associated classifier throughout target classifier denoted dependence dropped clear context denotes constraint set adversarial sample must satisfy used represent loss function classifier respect inputs true labels loss functions use standard loss denoted xent logit loss carlini wagner denoted logit described section adversary generate adversarial example xadv benign sample adding appropriate perturbation small magnitude szegedy adversarial example xadv either cause classifier misclassify targeted class targeted attack class ground truth class untargeted attack since attacks analyze focus neural networks particular also define notation specifically neural networks outputs penultimate layer neural network representing output network computed sequentially preceding layers known logits represent logits vector final layer neural network used classification usually softmax layer represented vector probabilities pfi pfi valuation setup empirical evaluation carried sections neural networks mnist lecun cortes krizhevsky hinton datasets details datasets architecture training procedure models given datasets mnist dataset images handwritten digits lecun cortes training examples test examples image belongs single class images dimension pixels total grayscale pixel value lies digits centered dataset used commonly benchmark classifiers use dataset since extensively studied attack perspective previous work dataset color images classes krizhevsky hinton images belong mutually exclusive classes airplane automobile bird cat deer dog frog horse ship truck training examples test examples exactly examples class images dimension pixels total channels red green blue pixel value lies odel training details section present architectures training details normally adversarially trained variants models mnist datasets accuracy model benign data given table mnist pixel mnist image data scaled trained four different models mnist dataset denoted models used represent good variety architectures attacks constrained distance vary adversary perturbation budget since perturbation budget image made solid gray model details models trained mnist dataset follows model parameters conv relu conv relu dropout relu dropout softmax model parameters dropout conv relu conv relu conv relu dropout softmax model parameters conv relu conv relu dropout relu dropout softmax model parameters relu dropout softmax models convolutional layers well fully connected layers also order magnitude parameters model hand fully connected layers order magnitude fewer parameters similarly model convolutional layers fewer parameters models models achieve greater classification accuracy 
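For concreteness, here is a minimal Keras sketch of the Model A layer sequence named above (conv-relu, conv-relu, dropout, fc-relu, dropout, softmax). The filter counts, kernel sizes, and dropout rates below are illustrative assumptions; the text fixes only the layer types and rough parameter counts.

```python
# Sketch of the Model A architecture; widths are assumed, not quoted.
from tensorflow import keras
from tensorflow.keras import layers

def build_model_a(input_shape=(28, 28, 1), num_classes=10):
    return keras.Sequential([
        layers.Conv2D(64, (5, 5), activation="relu", input_shape=input_shape),
        layers.Conv2D(64, (5, 5), activation="relu"),
        layers.Dropout(0.25),               # regularization between conv and fc
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
```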
test data model achieves classification accuracy due lack convolutional layers pixel image data choose three model architectures dataset denote resnet variants zagoruyko komodakis attacks constrained distance vary adversary perturbation budget name indicates resnet variants zagoruyko komodakis standard cnn tensorflow authors particular standard layer resnet width expansion wide resnet layers width set based best performing resnet zagoruyko komodakis tensorflow authors width indicates multiplicative factor number filters residual layer increased standard tensorflow abadi two convolutional layers followed normalization layer two fully connected layers weight decay trained steps trained steps trained steps benign training data models much accurate models trained batch size two resnets achieve close accuracy benenson test set hand achieves accuracy reflecting simple architecture complexity task etrics throughout paper use standard metrics characterize effectiveness various attack strategies mnist metrics attacks computed respect test set consisting samples metrics iterative attacks computed respect first samples test set data choose random samples test set attacks random samples iterative attacks evaluations targeted attacks choose target sample uniformly random set classification outputs except true class sample attack success rate main metric attack success rate fraction samples meets adversary goal xadv untargeted attacks xadv targeted attacks target szegedy alternative evaluation metrics discussed appendix average distortion also evaluate average distortion adversarial examples using average distance benign samples adversarial ones suggested rigazio xadv xadv number samples metric allows compare average distortion attacks achieve similar attack success rates therefore infer one stealthier number queries query based attacks make queries target model metric may affect cost mounting attack important consideration attacking systems costs associated number queries made uery based attacks radient stimation attack deployed learning systems often provide feedback input samples provided user given query feedback different adaptive algorithms applied adversaries understand system iteratively generate effective adversarial examples attack formal definitions attacks appendix initially explored number methods using query feedback carry attacks including particle swarm optimization kennedy simultaneous perturbation stochastic approximation spall however methods https effective finding adversarial examples reasons detailed section also contains results obtained given fact many attacks generating adversarial examples based gradient information tried directly estimating gradient carry attacks found effective range conditions words adversary approximate iterative fgsm attacks goodfellow kurakin using estimates losses needed carry attacks first propose gradient estimation attack based method finite differences spall drawback naive implementation finite difference method however requires queries per input dimension input leads explore methods random grouping features feature combination using components obtained principal component analysis pca reduce number queries threat model justification assume adversary obtain vector output probabilities input set queries adversary make note adversary access softmax probabilities able recover logits additive constant taking logarithm softmax probabilities untargeted attacks adversary needs access output probabilities two likely classes compelling reason assuming threat 
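The logit-recovery observation in the threat model follows from log softmax(z)_i = z_i − logsumexp(z): taking logarithms of the queried probabilities returns the logits shifted by one shared constant, which cancels in any loss built from logit differences. A minimal numpy check (function name is mine):

```python
import numpy as np

def logits_up_to_constant(probs):
    """log(softmax(z)) = z - logsumexp(z); the clip guards log(0)."""
    return np.log(np.clip(probs, 1e-12, 1.0))

z = np.array([2.0, -1.0, 0.5])
p = np.exp(z) / np.exp(z).sum()
zhat = logits_up_to_constant(p)
# Logit differences are recovered exactly (up to float error):
assert np.allclose(zhat - zhat[0], z - z[0])
```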
model adversary many existing cloudbased services allow users query trained models watson visual recognition clarifai google vision api results queries confidence scores used carry gradient estimation attacks trained models often deployed clients service mlaas providers liu thus adversary pose user mlaas provider create adversarial examples using attack used client provider comparing existing attacks results presented section compare attacks number existing attacks ther targeted untargeted case detailed descriptions attacks section particular compare following attacks make zero queries target model baseline attacks perturbations denoted rand section aligned perturbations denoted section transferability attack single local model section using fast gradient sign fgs iterative fgs ifgs samples generated single source model loss functions denoted transfer model transfer model transferability attack local model ensemble section using fgs ifgs samples generated source model loss functions transfer models transfer model model also compare attacks descriptions section results appendix inite difference method gradient estimation section focus method finite differences carry gradient estimation based attacks let function whose gradient estimated input function vector whose elements represented canonical basis vectors represented ith component everywhere else estimation gradient respect given fdx free parameter controls accuracy estimation approximation also used less accurate wright nocedal gradient function exists fdx finite difference method useful adversary aiming approximate gradient based attack since gradient directly estimated access function values pproximate fgs finite differences untargeted fgs method gradient usually taken respect loss true label input softmax probability vector loss network input log pfj log pfy index original class input gradient pfy pfy adversary query access softmax probabilities estimate gradient pfy plug get estimated gradient loss adversarial sample thus generated fdx pfy xadv sign pfy method generating adversarial samples denoted targeted adversarial samples generated using gradient estimation method fdx pft xadv sign pft targeted version method denoted stimating based loss also use loss function based logits found work well attacks carlini wagner loss function given max max represents ground truth label benign sample logits confidence parameter adjusted control strength adversarial perturbation confidence parameter set logit loss max max input correctly classified first term always greater incorrectly classified input untargeted attack meaningful carry thus loss term reduces max relevant inputs adversary compute logit values additive constant taking logarithm softmax probabilities assumed available threat model since loss function equal difference logits additive constant canceled finite differences method used estimate difference logit values original class second likely class one given untargeted adversarial sample generated loss case xadv sign similarly case adversary softmax probabilities adversarial sample xadv sign fdx similarly targeted adversarial sample xadv sign fdx max untargeted attack method denoted targeted version denoted mnist baseline model model baseline transfer model gradient estimation using finite differences rand iterative rand iterative transfer gradient estimation using finite differences iterative iterative table untargeted attacks entry attack success rate attack method given column model row number parentheses entry xadv average 
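To make the finite-difference estimator and the single-step attack built on it concrete, here is a minimal sketch. The function names, the flat float input, and the [0, 1] pixel clipping are my assumptions; `query_fn` stands for the scalar loss reconstructed from the queried class probabilities (e.g., −log p_y(x) for the cross-entropy variant, or the logit-difference loss).

```python
import numpy as np

def finite_difference_grad(query_fn, x, delta=1.0):
    """Two-sided estimate (f(x + d*e_i) - f(x - d*e_i)) / (2d) along each
    canonical basis vector: 2 queries per coordinate, 2d per gradient."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = 1.0
        grad[i] = (query_fn(x + delta * e) - query_fn(x - delta * e)) / (2 * delta)
    return grad

def fgs_estimated(query_fn, x, eps, delta=1.0):
    """Single-step L_inf attack with the estimated gradient substituted
    for the true gradient; clip keeps the result a valid image."""
    g = finite_difference_grad(query_fn, x, delta)
    return np.clip(x + eps * np.sign(g), 0.0, 1.0)
```

The cost of one full estimate, 2 queries per input dimension, is exactly what motivates the query-reduction strategies discussed later.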
distortion samples used attack row entry bold represents attack best performance model gradient estimation using finite differences method performance matching attacks mnist constraint constraint mnist baseline model baseline model gradient estimation using finite differences iterative transfer model transfer gradient estimation using finite differences iterative iterative iterative table targeted attacks adversarial success rates number parentheses entry xadv average distortion samples used attack mnist terative attacks estimated gradients iterative variant gradient based attack described section powerful attack often achieves much higher attack success rates setting simple gradient based attacks thus stands reason version iterative attack estimated gradients also perform better attacks described iterative attack iterations using loss fdxtadv pfy xtadv xadv xadv sign pfy xtadv step size constraint set adversarial sample attack denoted logit loss used instead denoted valuation radient stimation using inite ifferences section summarize results obtained using gradient estimation attacks finite differences describe parameter choices made match attack adversarial success rates gradient estimation attack finite differences successful untargeted attack mnist models significantly outperforms attacks table closely tracks fgs logit loss mnist figure adversarial samples generated iteratively iterative gradient estimation attack finite differences achieves adversarial success rate across models datasets table used value mnist dataset dataset average distortion closely matches counterparts given table constrained strategies model constrained strategies xent logit logit transfer model fgs xent transfer model fgs logit fgs logit fgs xent adversarial success adversarial success xent logit logit transfer fgs xent fgs logit fgs xent model mnist figure effectiveness various single step attacks model mnist figures gives variation adversarial success increased successful attack strategy cases gradient estimation attack using finite differences logit loss coincides almost exactly fgs attack logit loss also gradient estimation attack query reduction using pca logit performs well datasets well achieve highest adversarial success rates targeted setting targeted attacks achieves adversarial success rates almost models shown results table achieves adversarial success rates matches performance attacks table average distortion samples generated using gradient estimation methods similar attacks parameter choices use datasets using find larger value needed xent loss based attacks work reason probability values used xent loss sensitive changes logit loss thus gradient estimated since function value change single pixel perturbed iterative gradient estimation attacks using finite differences use mnist results throughout parameters used iterative fgs attack results given appendix translates queries mnist steps iteration queries steps iteration per sample find choices work well keep running time gradient estimation attacks manageable level however find achieve similar adversarial success rates much fewer queries using query reduction methods describe next section uery reduction major drawback approximation based attacks number queries needed per adversarial sample large input dimension number queries exactly approximation may large input examine two techniques order reduce number queries adversary make techniques involve estimating gradient groups features instead estimating one feature time justification use feature 
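The iterative variant repeats the estimated-gradient step with step size α and projects back onto the ε-ball around the original input after every step. A sketch under the same assumptions as above; `grad_est` is any of the estimators sketched in this section, and for a targeted attack one descends rather than ascends the loss.

```python
import numpy as np

def iterative_attack(query_fn, x0, eps, alpha, steps, grad_est):
    """Projected sign-gradient ascent with an estimated gradient."""
    x = x0.copy()
    for _ in range(steps):
        x = x + alpha * np.sign(grad_est(query_fn, x))
        x = np.clip(x, x0 - eps, x0 + eps)   # project onto the L_inf eps-ball
        x = np.clip(x, 0.0, 1.0)             # stay in the valid pixel range
    return x
```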
grouping comes relation gradients directional derivatives hildebrand differentiable functions directional derivative function defined generalization partial derivative differentiable functions implies directional derivative projection gradient along direction thus estimating gradient grouping features equivalent estimating approximation gradient constructed projecting function along appropriately chosen directions estimated gradient computed using techniques plugged equations instead finite difference term create adversarial sample next introduce techniques applied group features estimation uery reduction based random grouping simplest way group features choose without replacement random set features gradient simultaneously estimated features size set chosen number queries adversary make reduces case partial derivative respect every feature found section iteration algorithm set indices according determined thus directional derivative estimated average partial derivatives thus quantity estimated gradient averaged version algorithm gradient estimation query reduction using random features input output estimated gradient initialize empty vector dimension choose set random indices initialize iff approximation directional set derivative along end initialize iff set uery reduction using pca components principled way reduce number queries adversary make estimate gradient compute directional derivatives along principal components determined principal component analysis pca shlens requires adversary access set data represetative training data pca minimizes reconstruction error terms norm provides basis euclidean distance original sample sample reconstructed using subset basis vectors smallest concretely let samples adversary wants misclassify column vectors matrix centered data samples pnand let principal components normalized eigenvectors sample covariance matrix xxt since positive semidefinite matrix decomposition orthogonal matrix diag thus algorithm matrix whose columns unit eigenvectors eigenvalue variance along ith component algorithm gradient estimation query reduction using pca components input output estimated gradient initialize ith column compute approximation directional derivative along update end set query reduction model none adversarial success adversarial success query reduction none adversarial success random feature groupings model gradient estimation attack query reduction using random grouping logit loss logit model mnist adversarial success rate decreases number groups decreased size group dimension input none gradient estimation attack query reduction using pca components logit loss geqr logit model mnist adversarial success rates decrease number principal components used estimation decreased relatively high success rates maintained even gradient estimation attack query reduction using pca components logit loss geqr logit relatively high success rates maintained even figure adversarial success rates gradient estimation attacks query reduction fdqr technique logit model mnist technique either pca none refers case number queries dimension input algorithm matrix whose columns principal components quantity estimated algorithm approximation gradient pca basis kui kui term left represents approximation true gradient sum projection along top principal components algorithm weights representation pca basis approximated using approximate directional derivatives along principal components terative attacks query reduction performing iterative attack gradient estimated using finite 
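Both query-reduction estimators reduce to the same primitive: two queries estimate the directional derivative, i.e., the projection of the gradient onto a chosen direction. The sketch below implements a random-grouping estimator in the spirit of Algorithm 1 and a PCA-based estimator in the spirit of Algorithm 2; the group-direction normalization and sampling details are my reading of the algorithms, not verbatim code.

```python
import numpy as np

def directional_derivative(query_fn, x, v, delta=1.0):
    """Two queries approximate the projection of the gradient onto v."""
    return (query_fn(x + delta * v) - query_fn(x - delta * v)) / (2 * delta)

def grad_est_random_grouping(query_fn, x, num_groups, delta=1.0, rng=None):
    """Randomly partition coordinates into num_groups groups; each group
    costs 2 queries and all its coordinates share the estimated value."""
    rng = rng or np.random.default_rng()
    grad = np.zeros_like(x)
    for group in np.array_split(rng.permutation(x.size), num_groups):
        v = np.zeros_like(x)
        v[group] = 1.0 / np.sqrt(len(group))   # unit vector supported on the group
        grad[group] = directional_derivative(query_fn, x, v, delta)
    return grad

def grad_est_pca(query_fn, x, components, delta=1.0):
    """components: (k, d) orthonormal rows, e.g. the top-k principal
    components of a representative dataset; 2k queries total."""
    grad = np.zeros_like(x)
    for u in components:
        grad += directional_derivative(query_fn, x, u, delta) * u
    return grad
```

Either estimator plugs directly into the single-step or iterative attacks above, dropping the query cost from 2d to 2·num_groups or 2k per gradient.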
difference method equation could expensive adversary needing queries target model iterations finite difference estimation gradient lower number queries needed adversary use either query reduction techniques described reduce number queries attacks using loss denoted xent random grouping technique xent technique valuation radient stimation attacks query reduction section first summarize results obtained using gradient estimation attacks query reduction provide detailed analysis effect dimension attacks query reduction gradient estimation query reduction maintains high attack success rates datasets gradient estimation attack pca based query reduction logit effective performance close mnist figure figure iterative gradient estimation attacks random grouping pca based query reduction logit logit achieve close success rates untargeted attacks targeted attacks model mnist figure figure clearly shows effectiveness gradient estimation attack across models datasets adversarial goals random grouping effective pca based method attacks effective iterative attacks effect dimension gradient estimation attacks consider effectiveness gradient estimation random grouping based query reduction logit loss logit model mnist data figure number indices chosen iteration algorithm thus increases number groups decreases expect attack success decrease gradients larger groups features averaged effect see figure adversarial success rate drops increases grouping translates queries per mnist image thus order achieve high adversarial success rates random grouping method larger perturbation magnitudes needed hand approach logit much effective seen figure using principal components estimate gradient model mnist algorithm adversarial success rate compared without query reduction similarly using principal components figure adversarial success rate achieved adversarial success rate rises remarks decreasing number queries reduce attack success rates proposed query reduction methods maintain high attack success rates dversarial samples figure show examples successful untargeted adversarial samples model mnist images generated constraint mnist clearly amount perturbation added iterative attacks much smaller barely visible images fgs mnist ifgs mnist iterative figure untargeted adversarial samples model mnist attacks use logit loss perturbations images generated using attacks far smaller iterative attacks mnist classified attacks iterative attacks dog classified bird fgs finite difference attack frog gradient estimation attack query reduction fficiency gradient estimation attacks evaluations models run gpu batch size model mnist data attacks take seconds per sample respectively thus attacks carried entire mnist test set images minutes iterative attacks query reduction iterations per sample set taking seconds per sample similarly attack attack success queries time per sample finite diff gradient estimation iter finite diff iter gradient estimation particle swarm optimization spsa table comparison untargeted attack methods results attacks using first samples mnist dataset model constraint logit loss used methods expect pso uses class probabilities take seconds per sample query reduction using logit time taken seconds per sample dataset take roughly per sample iterative variants attacks iterations set take roughly per sample using query reduction logit iterations takes per sample time required per sample increases complexity network observed even attacks numbers case queries made parallel attack algorithm allows queries made parallel 
well find simple parallelization queries gives speedup limiting factor fact model loaded single gpu implies current setup fully optimized take advantage inherently parallel nature attack optimization greater speedups achieved remarks overall attacks efficient allow adversary generate large number adversarial samples short period time based attacks experimented particle swarm optimization pso commonly used evolutionary optimization strategy construct adversarial samples done sharif found prohibitively slow large dataset unable achieve high adversarial success rates even mnist dataset also tried use simultaneous perturbation stochastic approximation spsa method similar method finite differences estimates gradient loss along random direction step instead along canonical basis vectors step spsa requires queries target model large number steps nevertheless required generate adversarial samples single step spsa reliably produce adversarial samples two main disadvantages method convergence spsa much sensitive practice choice gradient estimation step size loss minimization step size even number queries gradient estimation attacks attack success rate lower even though distortion higher comparative evaluation attacks experimented mnist dataset given table pso based attack uses class probabilities define loss function found work better logit loss experiments attack achieves best speed attack success logit attacking defenses section evaluate attacks different defenses based adversarial training variants focus adversarial training based defenses aim directly improve robustness dnns among effective defenses demonstrated far literature using freely available code http logit across models mnist iterative untargeted targeted iterative adversarial success adversarial success rates untargeted attacks adversarial success rates iterative grausing query reduction parameters used query dient estimation attack using logit loss reduction indicated table pca used query reduction figure adversarial success rates attacks set model mnist model adversarially trained variants ref section models attacks use roughly queries per sample datasets find iterative gradient estimation attacks perform much better attack defenses nevertheless figure show addition initial random perturbation overcome gradient masking gradient estimation attack finite differences effective attack adversarially trained models mnist dataset model benign adv mnist table accuracy models benign test data background evaluation setup dversarial training szegedy goodfellow introduced concept adversarial training standard loss function neural network modified follows xadv true label sample underlying objective modification make neural networks robust penalizing training count adversarial samples training adversarial samples computed respect current state network using appropriate method fgsm ensemble adversarial training proposed extension adversarial training paradigm called ensemble adversarial training name suggests ensemble adversarial training network trained adversarial samples multiple networks iterative adversarial training modification adversarial training paradigm proposes training adversarial samples generated using iterative methods iterative fgsm attack described earlier madry dversarially trained models train variants model adversarial training strategies described using adversarial samples based constraint model trained fgs samples model trained iterative fgs samples using model ensemble training model trained fgs samples models well fgs 
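The modified training objective described here weights the benign and adversarial loss terms equally within each minibatch, with the adversarial examples recomputed against the current parameters. A minimal PyTorch-style sketch; the names are illustrative, and `attack_fn` stands for whichever crafting method (FGSM or iterative FGSM) the given variant uses.

```python
def adversarial_training_step(model, loss_fn, attack_fn, optimizer, x, y):
    """One step of adversarial training: equal-weight benign/adversarial mix."""
    x_adv = attack_fn(model, x, y)   # crafted w.r.t. the current weights
    loss = 0.5 * (loss_fn(model(x), y) + loss_fn(model(x_adv), y))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Ensemble adversarial training fits the same template with `attack_fn` drawing adversarial examples from a pool of pre-trained source models rather than from the model being trained.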
samples source samples chosen randomly mnist model model model model model baseline gradient estimation using finite differences rand baseline transfer model iterative transfer gradient estimation using finite differences rand iterative iterative iterative table untargeted attacks models adversarial training adversarial success rates average distortion xadv samples mnist constrained strategies model constrained strategies xent logit transfer model fgs xent transfer model fgs logit fgs logit fgs xent adversarial success adversarial success xent logit transfer fgs xent fgs logit fgs xent model mnist figure effectiveness various single step attacks adversarially trained models mnist model model attack highest performance till gradient estimation attack using finite differences initially added randomness beyond transferability attack single local model using samples model performs better model best performing attack transferability attack single local model using samples minibatch training adversarially trained models training batch contains samples benign adversarial samples either fgsm iterative fgsm implies loss weighted equally training set networks using standard ensemble adversarial training trained epochs using iterative adversarial training trained epochs train variants using adversarial samples constraint trained fgs samples constraint trained fgs samples well fgs samples trained iterative fgs samples using adversarial variants trained steps table shows accuracy models various defenses benign test data ingle step attacks defenses figure see attacks much lower adversarial success rates model compared model success rate gradient estimation attacks matches attacks adversarially trained networks well overcome add initial random perturbation samples using gradient estimation attack finite differences logit loss effective single step attacks model adversarial success rate surpassing transferability attack single local model figure see gradient estimation attacks using finite differences fgs attacks increased attacks perform best random perturbations rand transferability attack single local model latter performing slightly better baseline attacks due gradient masking phenomenon overcome adding random perturbations mnist interesting effect observed model variants adversarial success logit model fgs model logit model fgs model logit model fgs model figure increasing effectiveness attacks model model model mnist adding initial constrained random perturbation magnitude adversarial success rate higher likely explanation effect model overfitted adversarial samples gradient estimation attack closely tracks adversarial success rate attacks setting well increasing effectiveness attacks using initial random perturbation since gradient estimation attack finite differences performing well due masking gradients benign sample added initial random perturbation escape region attack figure shows effect adding initial perturbation magnitude addition random perturbation much improved adversarial success rate model going without perturbation total perturbation value even outperforms fgs random perturbation added effect also observed model model appears resistant gradient based attacks thus attacks work well attacks dnns standard ensemble adversarial training achieve performance levels close attacks terative attacks different adversarial training defenses attacks less effective lower one used training experiments show iterative attacks continue work well even adversarially trained networks example iterative 
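The random-start trick used above against gradient masking can be sketched as follows: spend α of the perturbation budget on a random sign flip, then take the estimated-gradient step from the perturbed point. The ε − α budget split below follows the usual R+FGSM convention and is my assumption; the text reports only that a small initial random perturbation is added before the gradient-estimation step.

```python
import numpy as np

def rand_plus_fgs(query_fn, x, eps, alpha, grad_est, rng=None):
    """Random start to escape the masked-gradient region, then one
    estimated-gradient sign step with the remaining budget."""
    rng = rng or np.random.default_rng()
    x_rand = x + alpha * np.sign(rng.standard_normal(x.shape))
    g = grad_est(query_fn, x_rand)
    return np.clip(x_rand + (eps - alpha) * np.sign(g), 0.0, 1.0)
```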
gradient estimation attack using finite differences logit loss achieves adversarial success rate model best transferability attack success rate comparable attack success rate table however model quite robust even iterative attacks highest attack success rate achieved figure see using queries per sample iterative gradient estimation attack using pca query reduction logit achieves untargeted targeted adversarial success rates model methods far outperform attacks shown table iterative attacks perform well adversarially trained models well achieves attack success rates table reduces slightly logit used matches performance attacks given table logit also achieves success rate targeted attacks shown figure iteratively trained model poor performance benign well adversarial samples accuracy benign data shown table iterative gradient estimation attack using finite differences loss achieves untargeted attack success rate model lower adversarially trained models still significant line observation madry iterative adversarial training needs models large capacity effective highlights limitation defense since clear model capacity needed models use already large number parameters remarks iterative variants gradient estimation attacks outperform attacks achieving attack success rates close attacks even adversarially trained models attacks larifai real world system since requirement carrying gradient estimation based attacks access target model number deployed public systems provide classification service used evaluate methods choose clarifai number models trained classify image datasets variety practical applications provides access models returns confidence scores upon querying particular clarifai models used detection safe work nsfw content well content moderation important applications presence adversarial samples presents real danger attacker using query access model could generate adversarial sample longer classified inappropriate example adversary could upload violent images adversarially modified marked incorrectly safe content moderation model evaluate attack using gradient estimation method clarifai nsfw content moderation models important point note given lack easily accessible dataset tasks train local surrogate model carrying attacks based transferability challenging task hand attack directly used image adversary choice content moderation model five categories safe suggestive explicit drug gore nsfw model two categories sfw query api image returns confidence scores associated category confidence scores summing use random grouping technique order reduce number queries take logarithm confidence scores order use logit loss large number successful attack images found https due possibly offensive nature included paper example attack content moderation api given figure original image left clearly kind drug table spoon syringe classified drug content moderation model confidence score image right adversarial image generated queries content moderation api constraint perturbation image still clearly classified human drugs table content moderation model classifies safe confidence score remarks proposed gradient estimation attacks successfully generate adversarial examples misclassified system hosted clarifai without prior knowledge training set model xisting black box attacks section describe existing methods generating adversarial examples attacks adversary perturbation constrained using distance baseline attacks describe two baseline attacks carried without knowledge query access target model andom 
perturbations knowledge training set simplest manner adversary may seek carry attack adding random perturbation input szegedy goodfellow fawzi perturbations generated distribution adversary choice constrained according appropriate norm let distribution random variable drawn according noisy sample xnoise since random noise added possible generate targeted adversarial samples principled manner attack denoted rand throughout ifference means perturbation aligned difference means two classes likely effective adversary hoping cause misclassification broad range classifiers perturbations far optimal dnns provide useful baseline compare adversaries least partial access training test sets carry attack adversarial sample generated using method constraints xadv sign mean target class mean original ground truth class untargeted attack argmini appropriately chosen distance function words class whose mean closest original class terms euclidean distance chosen target attack denoted throughout ffectiveness baseline attacks baseline attacks described choice distribution random perturbation attack choice distance function difference means attack fixed describe choices make attacks random perturbation sample mnist chosen independently according multivariate normal distribution mean depending norm constraint either signed scaled version random perturbation scaled unit vector direction perturbation added untargeted attack utilizing perturbations aligned difference means sample mean class closest original class distance determined expected adversarial samples generated using rand achieve high adversarial success rates table spite similar larger average distortion attacks mnist models however method quite effective higher perturbation values mnist dataset seen figure also models attack effective method less effective targeted attack case model outperforms transferability based attack considerably success rate comparable targeted transferability based attack model well relative effectiveness two baseline methods reversed dataset however rand outperforms considerably increased indicates models trained mnist normal vectors decision boundaries aligned vectors along difference means compared models ingle step terative fast radient ethods describe two attack methods used attacks constructed approximate versions section attacks based either iterative gradient based minimization appropriately defined loss functions neural networks results attacks contained appendix since methods require knowledge model gradient assume adversary access local model adversarial samples generated transferred target model carry attack papernot ensemble local models liu may also used attacks described section fast gradient method first introduced goodfellow utilizes firstorder approximation loss function order construct adversarial samples adversary surrogate local model samples constructed performing single step gradient ascent untargeted attacks formally adversary generates samples xadv constraints known fast gradient sign fgs method untargeted attack setting xadv sign loss function respect gradient taken loss function typically used loss goodfellow adversarial samples generated using targeted fgs attack xadv sign target class iterative fast gradient methods simply variants fast gradient method described kurakin gradient loss added sample iterations starting benign sample updated sample projected satisfy constraints every step adv xadv sign xadv iterative fast gradient methods thus essentially carry projected gradient descent pgd goal 
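The two zero-query baselines above are straightforward to state in code for the L_inf constraint; a sketch (names mine), with the class means computed from whatever data the adversary holds:

```python
import numpy as np

def rand_baseline(x, eps, rng=None):
    """Random signed perturbation of magnitude eps per pixel."""
    rng = rng or np.random.default_rng()
    return np.clip(x + eps * np.sign(rng.standard_normal(x.shape)), 0.0, 1.0)

def diff_of_means_baseline(x, eps, mean_target, mean_orig):
    """Step toward the target-class mean; for the untargeted variant the
    text picks the closest other class mean (in Euclidean distance)."""
    return np.clip(x + eps * np.sign(mean_target - mean_orig), 0.0, 1.0)
```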
maximizing loss pointed madry targeted adversarial samples generated using iterative fgs adv xadv sign xadv eyond cross entropy loss prior work carlini wagner investigates variety loss functions attacks based minimization appropriately defined loss function experiments neural networks untargeted attacks use loss function based logits found work well attacks carlini wagner loss function given max max represents ground truth label benign sample logits confidence parameter adjusted control strength adversarial sample targeted adversarial samples generated using following loss term xadv sign max ransferability based attacks describe attacks assume adversary access representative set training data order train local model one earliest observations regards adversarial samples neural networks transfer adversarial attack samples generated one network also adversarial another network observation directly led proposal attack adversary would generate samples local network transfer target model referred transferability based attack targeted transferability attacks carried using locally generated targeted adversarial samples ingle local model attacks use surrogate local model craft adversarial samples submitted order cause misclassification existing attacks based transferability single local model papernot different attack strategies generate adversarial instances introduced section used generate adversarial instances attack nsemble local models since clear local model best suited generating adversarial samples transfer well target model liu propose generation adversarial examples ensemble local models method modifies existing transferability attacks substituting sum loss functions place loss single local model concretely let ensemble local models usedp generate local loss ensemble loss computed ens weight given model ensemble fgs attack ensemble setting becomes xadv sign ens iterative fgs attack modified similarly liu show transferability attack local model ensemble performs well even targeted attack case transferability attack single local model usually effective untargeted attacks intuition one model gradient may adversarial target model likely least one gradient directions ensemble represents direction somewhat adversarial target model untargeted transferability model source iterative targeted transferability model iterative source xent logit xent logit table adversarial success rates attacks model mnist numbers parentheses beside entry give average distortion xadv test set table compares effectiveness using single local model generate adversarial examples versus use local ensemble ransferability attack results transferability experiments choose transfer model mnist dataset dataset models similar least one models respective dataset different one others also fairly representative instances dnns used practice adversarial samples generated using methods transferred model models higher success rates untargeted attacks generated using logit loss compared cross entropy loss seen table iterative adversarial samples however untargeted attack success rates roughly loss functions observed adversarial success rate targeted attacks transferability much lower untargeted case even iteratively generated samples used mnist dataset highest targeted transferability rate table compared untargeted case table one attempt improve transferability rate use ensemble local models instead single one results mnist data presented table general untargeted targeted transferability increase ensemble used however increase monotonic 
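The logit-based (Carlini–Wagner style) loss invoked above, and the ensemble loss used for the multi-model transfer attack, can be written for a single example as follows. Here `phi` is the logit vector, `y` the true label, and `kappa` the confidence margin; the helper names are mine.

```python
import numpy as np

def logit_loss(phi, y, kappa=0.0):
    """max(phi_y - max_{i != y} phi_i, -kappa): positive while the input
    is still classified as y, so an untargeted attack drives it down
    (equivalently, ascends its negation)."""
    other = np.max(np.delete(phi, y))
    return max(phi[y] - other, -kappa)

def ensemble_loss(losses, alphas):
    """Weighted sum of per-model losses for the ensemble transfer attack."""
    return sum(a * l for a, l in zip(alphas, losses))
```

Because `logit_loss` depends only on differences of logits, it is unchanged by the additive constant left over when logits are recovered from softmax probabilities, which is what makes it usable in the query-only setting.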
number models used ensemble see transferability rate samples falls sharply model added ensemble may due different architecture compared models thus also different gradient directions highlights one pitfalls transferability important use local surrogate model similar target model achieving high attack success rates onclusion overall paper conduct systematic analysis new existing attacks classifiers defenses propose gradient estimation attacks achieve high attack success rates comparable even attacks outperform attacks apply random grouping pca based methods reduce number queries required small constant demonstrate effectiveness gradient estimation attack even setting also apply attack classifier defenses results show gradient estimation attacks extremely effective variety settings making development better defenses attacks urgent task eferences abadi ashish agarwal paul barham eugene brevdo zhifeng chen craig citro greg corrado andy davis jeffrey dean matthieu devin sanjay ghemawat ian goodfellow andrew harp geoffrey irving michael isard yangqing jia rafal jozefowicz lukasz kaiser manjunath kudlur josh levenberg dan rajat monga sherry moore derek murray chris olah mike schuster jonathon shlens benoit steiner ilya sutskever kunal talwar paul tucker vincent vanhoucke vijay vasudevan fernanda oriol vinyals pete warden martin wattenberg martin wicke yuan xiaoqiang zheng tensorflow machine learning heterogeneous systems url http software available rodrigo benenson classification datasets results http accessed nicholas carlini david wagner towards evaluating robustness neural networks ieee symposium security privacy chen huan zhang yash sharma jinfeng hsieh zoo zeroth order optimization based attacks deep neural networks without training substitute models arxiv preprint clarifai clarifai image video recognition api https accessed hung dang huang yue chang evading classifiers morphing dark alhussein fawzi omar fawzi pascal frossard analysis classifiers robustness adversarial perturbations arxiv preprint ian goodfellow yoshua bengio aaron courville deep learning mit press ian goodfellow jonathon shlens christian szegedy explaining harnessing adversarial examples international conference learning representations google vision api vision api image content analysis google cloud platform https accessed shixiang luca rigazio towards deep neural network architectures robust adversarial examples arxiv preprint kaiming xiangyu zhang shaoqing ren jian sun deep residual learning image recognition proceedings ieee conference computer vision pattern recognition francis begnaud hildebrand advanced calculus applications volume englewood cliffs james kennedy particle swarm optimization encyclopedia machine learning springer alex krizhevsky geoffrey hinton learning multiple layers features tiny images alexey kurakin ian goodfellow samy bengio adversarial examples physical world arxiv preprint yann lecun corrina cortes mnist database handwritten digits amy liu clarifai featured hack block unwanted nudity blog comments disqus https accessed yanpei liu xinyun chen chang liu dawn song delving transferable adversarial examples attacks iclr alhussein fawzi pascal frossard deepfool simple accurate method fool deep neural networks arxiv preprint alhussein fawzi omar fawzi pascal frossard universal adversarial perturbations arxiv preprint konda reddy mopuri utsav garg venkatesh babu fast feature fool data independent approach universal adversarial perturbations arxiv preprint aleksander madry aleksandar makelov ludwig 
schmidt dimitris tsipras adrian vladu towards deep learning models resistant adversarial attacks stat june nina narodytska shiva prasad kasiviswanathan simple adversarial perturbations deep networks arxiv preprint blaine nelson benjamin rubinstein ling huang anthony joseph steven lee satish rao tygar query strategies evading classifiers journal machine learning research nicolas papernot patrick mcdaniel ian goodfellow transferability machine learning phenomena attacks using adversarial samples arxiv preprint nicolas papernot patrick mcdaniel ian goodfellow somesh jha berkay celik ananthram swami practical attacks deep learning systems using adversarial examples proceedings acm asia conference computer communications security mahmood sharif sruti bhagavatula lujo bauer michael reiter accessorize crime real stealthy attacks face recognition proceedings acm sigsac conference computer communications security acm jonathon shlens tutorial principal component analysis arxiv preprint james spall multivariate stochastic approximation using simultaneous perturbation gradient approximation ieee transactions automatic control james spall introduction stochastic search optimization estimation simulation control volume john wiley sons christian szegedy wojciech zaremba ilya sutskever joan bruna dumitru erhan ian goodfellow rob fergus intriguing properties neural networks international conference learning representations tensorflow authors tensorflow resnet models https accessed tensorflow authors tensorflow tutorial model https accessed florian alexey kurakin nicolas papernot dan boneh patrick mcdaniel ensemble adversarial training attacks defenses arxiv preprint florian nicolas papernot ian goodfellow dan boneh patrick mcdaniel space transferable adversarial examples arxiv preprint watson visual recognition watson visual recognition https accessed stephen wright jorge nocedal numerical optimization springer science weilin yanjun david evans automatically evading classifiers proceedings network distributed systems symposium sergey zagoruyko nikos komodakis wide residual networks arxiv preprint lternative adversarial success metric note adversarial success rate also computed considering fraction inputs meet adversary objective given original sample correctly classified one would count fraction correctly classified inputs xadv untargeted case xadv targeted case sense fraction represents samples truly adversarial since misclassified solely due adversarial perturbation added due classifier failure generalize well practice methods measuring adversarial success rate lead similar results classifiers high accuracy test data ormal definitions based attacks provide unified framework assuming adversary make active queries model existing attacks making zero queries special case framework given input instance adversary makes sequence queries based adversarial constraint set iteratively adds perturbations desired query results obtained using corresponding adversarial example xadv generated formally define targeted untargeted attacks based framework definition untargeted attack given input instance iterative active query attack strategy query sequence generated qfi denotes ith corresponding query result set attack untargeted adversarial example xadv satisfies xadv number queries made definition targeted attack given input instance iterative active query attack strategy query sequence generated qfi denotes ith corresponding query result set attack targeted adversarial example xadv satisfies xadv target class number 
queries made respectively case adversary makes queries target classifier special case refer attack literature number attacks carried varying degrees success papernot liu mopuri abbreviation untargeted targeted rand transfer model transfer model transfer model transfer model logit logit logit logit surrogate attack fgs steps ifgs iterative gradient estimation query based attack transfer model steps iterative gradient estimation technique random grouping pca fast gradient sign fgs steps iterative iterative steps iterative loss function loss function loss function loss function table attacks evaluated paper ummary attacks evaluated taxonomy attacks deepen understanding effectiveness attacks work propose taxonomy attacks intuitively based number queries target model used attack details provided table evaluate following attacks summarized table attacks baseline attacks perturbations rand aligned perturbations transferability attack single local model using fast gradient sign fgs iterative fgs ifgs samples generated single source model loss functions transfer model transfer model transferability attack local model ensemble using fgs ifgs samples generated source model loss functions transfer models transfer model model query based attacks iterative attacks gradient estimation attack loss functions gradient estimation iterative gradient estimation query reduction attacks loss using two query reduction techniques random grouping principal component analysis components pca logit fgs ifgs attacks loss functions loss hite box attack results section present attack results various cases tables relevant results match previous work goodfellow kurakin mnist model iterative fgs xent fgs logit ifgs xent ifgs logit model fgs xent fgs logit iterative ifgs xent ifgs logit table untargeted attacks adversarial success rates average distortion xadv test set mnist mnist model iterative fgs xent fgs logit ifgs xent ifgs logit model fgs xent fgs logit iterative ifgs xent ifgs logit table targeted attacks adversarial success rates average distortion xadv test set mnist mnist model model model model iterative fgs xent fgs logit ifgs xent ifgs logit model fgs xent fgs logit iterative ifgs xent ifgs logit table untargeted attacks models adversarial training adversarial success rates average distortion xadv test set mnist
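The attack formulas quoted in this section were garbled during extraction. As standardly stated in the works cited above (iterative FGS per Kurakin et al., the logit loss per Carlini and Wagner, the ensemble loss per Liu et al.), and offered here as a reconstruction rather than a verbatim restoration, they read:

```latex
% Iterative FGS with step size \alpha and projection \Pi_H onto the constraint set
% (for targeted attacks, flip the sign and replace y by the target class T):
x_{adv}^{t+1} = \Pi_H\!\left( x_{adv}^{t} + \alpha \cdot \mathrm{sign}\!\left( \nabla_{x_{adv}^{t}} \ell_f\!\left(x_{adv}^{t}, y\right) \right) \right)

% Carlini--Wagner logit loss with logits \phi and confidence parameter \kappa:
\ell_f(x, y) = \max\!\left( \phi(x)_y - \max_{i \neq y} \phi(x)_i,\; -\kappa \right)

% Ensemble loss over m local models with weights \alpha_i, and the ensemble FGS step:
\ell_{ens}(x, y) = \sum_{i=1}^{m} \alpha_i\, \ell_{f^i}(x, y), \qquad
x_{adv} = x + \epsilon \cdot \mathrm{sign}\!\left( \nabla_x\, \ell_{ens}(x, y) \right)
```

A minimal sketch of the ensemble transferability attack, assuming each local model exposes a callable that returns the gradient of its loss at (x, y); all names are illustrative, not the paper's code:

```python
import numpy as np

def fgs_ensemble(x, y, grad_fns, weights, eps):
    """One-step FGS against an ensemble: perturb along the sign of the
    weighted sum of the local models' loss gradients."""
    g = sum(w * gf(x, y) for w, gf in zip(weights, grad_fns))
    return np.clip(x + eps * np.sign(g), 0.0, 1.0)

def iterative_fgs_ensemble(x, y, grad_fns, weights, eps, steps):
    """Iterative variant: small signed steps, clipped back into the
    L-infinity ball of radius eps around x and into valid pixel range."""
    x_adv, alpha = x.copy(), eps / steps
    for _ in range(steps):
        g = sum(w * gf(x_adv, y) for w, gf in zip(weights, grad_fns))
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```

The gradient estimation attack in the taxonomy replaces the true gradient with a finite-difference estimate obtained purely from queries to the target model. A sketch under my reading of the method: `loss` is a black-box callable wrapping queries to the target, and the step size `delta` and the per-group averaging are assumptions rather than the paper's exact choices:

```python
import numpy as np

def fd_gradient(loss, x, delta=1.0):
    """Two-sided finite-difference gradient estimate, one coordinate at a
    time: 2d queries for a d-dimensional input."""
    flat = x.ravel().astype(float)
    g = np.zeros_like(flat)
    for i in range(flat.size):
        e = np.zeros_like(flat)
        e[i] = delta
        g[i] = (loss((flat + e).reshape(x.shape)) -
                loss((flat - e).reshape(x.shape))) / (2 * delta)
    return g.reshape(x.shape)

def fd_gradient_grouped(loss, x, k, delta=1.0, rng=None):
    """Random-grouping query reduction: partition the d coordinates into k
    random groups and estimate one directional derivative per group, so
    only 2k queries are needed."""
    rng = rng or np.random.default_rng(0)
    flat = x.ravel().astype(float)
    g = np.zeros_like(flat)
    for idx in np.array_split(rng.permutation(flat.size), k):
        e = np.zeros_like(flat)
        e[idx] = delta
        d = (loss((flat + e).reshape(x.shape)) -
             loss((flat - e).reshape(x.shape))) / (2 * delta)
        g[idx] = d / len(idx)  # spread the group's estimate over its coordinates
    return g.reshape(x.shape)

def gradient_estimation_fgs(loss, x, eps, k=None):
    """FGS step driven by estimated rather than true gradients."""
    g = fd_gradient(loss, x) if k is None else fd_gradient_grouped(loss, x, k)
    return np.clip(x + eps * np.sign(g), 0.0, 1.0)
```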
collaborative dense reconstruction online pose optimisation stuart victor prisacariu tommaso david murray nicholas philip torr jan department engineering science university oxford smg tommaso nicklord victor dwm phst abstract reconstructing dense volumetric models scenes important many tasks capturing large scenes take significant time risk transient changes scene goes capture time increases good reasons want instead capture several smaller joined make whole scene achieving traditionally difficult joining may never viewed angle requires relocaliser cope novel poses tracking drift prevent joined make consistent overall scene recent advances mobile hardware however significantly improved ability capture little tracking drift moreover highquality regression relocalisers recently made practical introduction method allow trained used online paper leverage advances present knowledge first system allow multiple users collaborate interactively reconstruct dense models whole buildings using system entire house lab captured reconstructed half hour using hardware figure globally consistent collaborative reconstruction house produced approach reconstruction involved relocalising separate sequences priory subset dataset pensating tracking drift however even sophisticated approaches capturing data needed reconstruct large scene whole building take significant time planning require considerable concentration part user moreover risk transient changes scene people moving around goes capture time increases corrupting model forcing user restart capture thus good reasons want split capture several shorter sequences captured either multiple sessions parallel multiple users joined make whole scene achieving traditionally difficult joining requires ability accurately determine relative transformations camera introduction reconstructing dense volumetric models scenes important task computer vision robotics applications content creation films games augmented reality cultural heritage preservation building information modelling since seminal kinectfusion work newcombe demonstrated reconstruction scene real time using sensor huge progress made increasing size scene able reconstruct golodetz cavallari lord assert joint first authorship figure examples approach showing flat house lab subsets dataset calisation problem even though areas overlap may never viewed angles tracking drift prevent joined make consistent overall scene recent advances mobile hardware however introduction augmented reality smartphones estimate pose using odometry significantly improved ability capture subscenes little drift moreover relocalisation relocalisers random ferns previously widely used relocalisation context unable relocalise novel poses recently giving way methods score forests driven recent work showed could trained used online unlike methods approaches shown much better suited relocalisation novel poses critical aligning captured different angles paper leverage advances present knowledge first system allow multiple users collaborate interactively reconstruct dense models whole buildings unlike previous mapping approaches approach able reconstruct detailed globally consistent dense models using system entire house lab captured reconstructed half hour using consumergrade hardware see integrated approach semanticpaint framework making easy existing semanticpaint infinitam users benefit work constructed new dataset validate approach make code dataset available online figures show reconstructions produced approach approaches divided two 
main categories related work centralised approaches contrast take advantage computing power provided one central servers ability communicate agents produce detailed globally consistent maps example chebrolu described approach based monocular although previous work mapping focused dense reconstruction mapping rich research history computer vision robotics several good surveys exist existing decentralised approaches eschew use central server instead produce local map scene agent often transmitting local maps agents meet share knowledge parts scene individual agents yet visited example cunningham proposed approach called ddfsam robot produces map shares compressed timestamped versions neighbouring robots extended registered maps together using approach based delaunay triangulation ransac lab later proposed avoids repeated expensive recreation combined neighbourhood map use cieslewski presented sophisticated decentralised collaborative mapping based distributed version control choudhary described approach performs decentralised mapping using maps decreasing bandwidth required map sharing depending existence objects scene recently cieslewski aimed minimise bandwidth robot uses relocalisation first establishing robots relevant information communicating best robot found decentralised approaches numerous applications including search rescue agricultural robotics planetary exploration underwater mapping limited computing power tends available mobile agents existing approaches target robustness unreliable network connections mechanical failures rather reconstructing detailed scene geometry limiting usefulness tasks like building information modelling cultural heritage preservation reads frames mapping component updates voxel scene agent mapping server local relocaliser accesses scenes mapping component agent scene renderer uses voxel scene reads poses local relocaliser reads trajectories accesses scenes relocalisers pose graph optimiser trajectory candidate relocalisation selector feeds relocaliser operates pose graph adds relative transform samples trajectory considers existing clusters figure architecture system individual agents whether local remote track poses feed posed frames mapping server separate mapping component instantiated agent reconstructs voxel scene trains local relocaliser separately candidate relocalisation selector repeatedly selects pose one agents trajectories relocalisation another scene relocaliser uses scene renderer render synthetic rgb depth images corresponding scene selected pose passes local relocaliser target scene relocalisation succeeds verified see sample relative transform two scenes recorded relative transform samples scene pair clustered robustness see whenever cluster sample added sufficiently large construct pose graph blending relative poses largest clusters trigger pose graph optimisation optimised poses used rendering overall scene clients produced maps keyframes sent central server optimisation riazuelo described distributed visual slam system inspired ptam clients performed tracking expensive mapping steps performed cloud mohanarajah described another approach based authors rapyuta robotics platform client robots estimated local poses running dense visual odometry images primesense camera keyframes centrally optimised using forster demonstrated centralised collaborative mapping micro aerial vehicles mavs equipped monocular camera imu recently schmuck chli shown incorporate feedback collaborative approach allowing agents share information approaches fit 
cleanly either category reid described distributed approach multiple autonomous ground robots controlled centralised ground control station gcs able operate independently connection gcs failed mcdonald described stereo approach single agent reconstructed scene multiple sessions effectively collaborating time chen described approach initially robot building pose graph independently storing sensor data exchanging information robots rendezvous later robots transferring pose graphs sensor data central server pose graph optimisation building coloured point cloud map fankhauser described approach allowed asctec firefly hexacopter quadrupedal ground robot work together help ground robot navigate safely whilst fixedposition central server used ground robot significant computing power effectively played role server performing bundle adjustment data received hexacopter maintain globally consistent map approach since target scenarios building information modelling hardware failure minor concern adopt centralised approach number lightweight mobile clients powerful central server laptop desktop one gpus see figure client estimates accurate local poses sequence frames see transmits frames poses central server see server constructs trains relocaliser client relocalises different clients establish consistent global map see system run either interactive batch mode batch mode sent across server simply read directly disk relocalisation performed interactive mode server start relocalising clients immediately new clients join fly contribute map local tracking mentioned client system must estimate accurate local poses sequence frames transmit frames poses central server since traditional pose estimation approaches particularly based purely visual tracking tended subject significant tracking drift particularly larger scales collaborative mapping common solution clients simply transmit inaccurate local poses rely server perform global optimisation pose graph optimisation keyframes bundle adjustment achieve globally consistent map optimised poses sent back clients desired however global optimisations scale well limiting overall size map constructed local poses clients corrected global optimisations server finish meaning much time fully trusted system instead leverage recent developments augmented reality hardware place burden accurate local pose estimation clients freeing server focus reconstructing dense global map real time particular use straightforward client captures images poses provided hardware based odometry transmits server see important choice allows significantly simplify design server see reduce memory consumption making possible support agents larger global maps network bandwidth clients use transmit frames tracked successfully compress depth images png format lossless rgb images jpg format lossy moreover maintain smooth interactive user experience client transmit messages containing frames accompanying poses server separate thread iteratively reads frame message pooled queue reusable messages compresses sends main thread writes uncompressed frame messages queue based current input discard messages would overflow queue network slow keep client maintain interactivity bound client memory usage way interacts compression strategy evaluated supplementary material server end client handler running separate thread maintains pooled queue uncompressed frame messages compressed frame message arrives immediately uncompressed pushed onto queue client discard messages would overflow queue main thread run mapping 
component client see reads frames accompanying poses client queue necessary creates local map global mapping frame transmission client server server two jobs constructing training relocaliser client determining relative transforms establish global coordinate system achieve first separate mapping component client runs opensource infinitam reconstruction engine incoming posed frames construct map trains regression relocaliser online per cavallari achieve second server attempts relocalise synthetic images one agent using another agent relocaliser find estimates relative transform subscenes see samples clustered transformation space help suppress outliers pose graph constructed optimised background refine relative transforms see optimisation inspired approach showed build globally consistent models dividing scene small optimising relative poses however construct pose graph agent represented single node edges denote relative transforms different agents differs came one agent relocalisation use tcp client server guarantee delivery frames minimise details design pooled queue data structure found supplementary material needed establish transforms relocalisation maintain smooth interactive experience server attempt relocalisations different clients separate thread available separate gpu way relocalisation attempts scheduled depends mode server running batch mode relocalisations attempted client fully created point attempted quickly possible new relocalisation attempt scheduled soon previous attempt finishes interactive mode relocalise whilst client still reconstructed every frames relocalisers trained online used simultaneously need space relocalisation attempts allow sufficient time attempts relocalisers trained schedule attempt first randomly generate list candidate relocalisations candidate denotes attempt relocalise frame scene scene coordinate system balance different pairs scenes may reconstructed varying numbers frames first uniformly sample scene pair set scene pairs uniformly sample frame index scene generated candidate scored via aims give boost candidates might connect new nodes pose graph defined one already optimised global pose otherwise penalises candidates add relative transform sample pair already confidently relocalised respect max max threshold number relocalisations needed become confident relative transform pair correct penalises candidates whose frame local pose scene close within one already tried scene use penalty poses close otherwise scored candidates schedule relocalisation attempt candidate maximum score proceed shown figure let denote pose frame coordinate system first render synthetic rgb depth raycasts known pose call try relocalise using relocaliser obtain estimated pose coordinate system verify estimated pose first render synthetic depth raycast compute masked absolute depth difference image via otherwise ranges domain compute add relative transform sample iff corresponding effectiveness verification step evaluated pose optimisation incrementally cluster relative transform samples add pair prior performing pose graph optimisation suppress effect outliers final result adding sample look see cluster sample added samples sample contributed confident relative transform worthwhile run pose graph optimisation since pose graph construct may changed since last run construct pose graph first compute binary relation true iff largest cluster size confident relative transform next compute reflexive transitive closure relation true iff chain possibly empty confident relative 
transforms finally denote scene corresponding first agent primary scene add pose graph node scene confidently connected primary scene determine edges add graph filter list containing pair pairs whose largest cluster size confidently connected primary scene surviving pair blend relative transform samples largest cluster using dual quaternion blending form overall estimate relative transform add edge graph adding one edge scene pair keep pose graph optimising small allowing optimisation run repeatedly background optimise overall map goal optimising pose graph find optimised global pose scene node perform optimisation use approach implemented opensource infinitam reconstruction engine uses failed renderer verifier renderer rejected figure relocalise scene agent another agent first choose arbitrary frame trajectory render synthetic rgb depth raycasts scene frame pose try relocalise using relocaliser either fails produces estimated pose frame coordinate system pose proposed verify rendering synthetic depth raycast scene proposed pose comparing synthetic depth raycast scene accept pose iff two depth raycasts sufficiently similar see minimise error function denotes concatenation three imaginary components quaternion representing rotational part translational part implicitly optimisation trying achieve every ensure optimised global poses scenes consistent estimated relative poses experiments perform quantitative qualitative experiments evaluate approach evaluate quality reconstruction obtain collaborative approach comparing variety measurements made combined mesh scene measurements made laser range finder real world evaluate effectiveness depth differencebased verification approach see showing verifier able prune large numbers incorrect relative transforms whilst rejecting practically correct transforms demonstrate scalability approach showing using synthetic images relocalisation discarding data relocalisers use longer needed support agents evaluate strategy using synthetic scene raycasts relocalisation rather real images original sequences testing approaches standard relocalisation dataset shotton finally time long approach takes produce consistent reconstructions four different subsets dataset supplementary material contains analysis reconstruction quality demonstrate ability achieve collaborative reconstructions combined sequences flat subset dataset reconstruct combined map flat see figure since reconstruction flat available access lidar scanner obtain one validated reconstruction comparing variety measurements made combined map groundtruth measurements made laser range finder bosch professional glm real world achieve first converted maps produced infinitam maps using marching cubes applied relative transforms estimated relocalisation process meshes transform common coordinate system finally imported transformed meshes meshlab used measurement tool make measurements shown figure found measurements figure gpu memory use collaborative reconstruction priory sequences dataset figure example reconstructions achieve using collaborative approach joined flat sequences dataset make combined map flat purple lines show measurements performed using laser range finder combined mesh model validate approach ordering numbers range finder mesh reconstructed model consistently within ground truth indicating able achieve reconstructions correspond relatively well geometry examples time showing collaborative reconstructions houses large research lab found supplementary material effectiveness depth difference 
verification mentioned verify proposed interagent relocalisation rendering synthetic depth image scene proposed pose comparing synthetic depth image scene passed relocaliser masking pixels valid depth images evaluate effectiveness approach first took pairs sequences dataset able successfully relocalise respect normal operation approach recorded relative transform ground truth transform later use attempted relocalise every frame scene using relocaliser scene next frame relocaliser proposed relative transform ran verification step proposed transform thereby classifying either verified rejected compared proposed transform ground truth transform classifying correct within ground truth incorrect otherwise finally counted transforms true positives transforms false positives similarly true negatives false negatives shown table results show verifier extremely high average recall rate meaning largely manages avoid rejecting correct transforms also reasonably good average specificity meaning fairly good pruning number incorrect transforms need deal however fairly large number incorrect transforms still manage pass verification stage mentioned dealt later clustering transforms making use transforms largest cluster additional effects clustering step shown supplementary material scalability batch mode evaluate scalability approach perform separate experiments batch interactive modes system since target slightly different applications interactive experiment supplementary material batch mode experiment focuses server gpu memory usage number agents increases evaluate performed collaborative reconstruction priory sequences dataset represent threestory house see supplementary material maximise number agents able handle added new server one time deleted training data used relocaliser fully trained limited maximum memory used relocalisers leaving bounded primarily size reconstructed voxel scenes figure shows final gpu memory use relocaliser meaning potentially handle around relocalisers gpu nvidia titan actual number may slightly less due driver overhead moreover since relocalisers use structure scene could potentially merged produce global relocaliser reduce memory usage still memory used voxel scenes currently bottleneck preventing scene scene total frames relocalised frames verifier performance precision recall specificity average scene pairs table evaluating effectiveness depth difference verification perform proposed relocalisations pairs see attempt relocalise every frame using record number frames able relocalise together statistics many relocalisations verifier ing approach agents currently scene takes memory limiting around agents titan assuming relocalisers stored secondary gpu scale could reduce memory used meshing scene marching cubes discarding voxel maps relocalisation synthetic images per relocalise scenes different agents using synthetic images rather real frames originally captured agents avoids prohibitive memory cost storing frames acquired agent ram easily hundreds mbs per agent however might expect using synthetic images lower relocalisation performance since train relocalisers scene using real input frames verify problem compared results able obtain using synthetic images results obtained using approach cavallari real images standard dataset unlike used modes leaf regression forests used modes since found gave better results cases test synthetic approach first reconstructed sequence real training images normal rendered synthetic frames testing poses rather using testing images dataset 
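A minimal sketch of the depth-difference verification described above, assuming depth raycasts arrive as arrays in which non-positive values mark invalid pixels; the thresholds here are illustrative placeholders, not the paper's tuned values:

```python
import numpy as np

def verify_relocalisation(depth_known, depth_proposed, tau=0.05, min_overlap=0.5):
    """Accept a proposed inter-agent relocalisation iff the masked mean
    absolute difference between the two synthetic depth raycasts is small.

    depth_known    : raycast of the source scene at the known pose
    depth_proposed : raycast of the target scene at the estimated pose
    """
    valid = (depth_known > 0) & (depth_proposed > 0)  # pixels valid in both raycasts
    if valid.mean() < min_overlap:                    # too little common coverage to judge
        return False
    return float(np.abs(depth_known[valid] - depth_proposed[valid]).mean()) < tau
```

Accepted transforms still include some outliers, which is exactly why the downstream clustering step keeps only the largest mutually consistent cluster of relative-transform samples.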
table shows results using synthetic images least good cases actually higher results using real images verifying using synthetic images decrease relocalisation performance practice likely due fact rendering synthetic images reconstructed scene implicitly remove noise frames relocalised improving pose estimation accuracy timings evaluate long takes produce consistent reconstructions using approach computed sequence real images synthetic images chess fire heads office pumpkin redkitchen stairs table comparing relocalisation results obtained rendering synthetic images scenes dataset test poses obtained using real test images adapting regression forest office percentages denote proportions test frames translation error angular error flat priory house lab sequences longest sequence frames longest sequence time average mapping time average total time table times taken collaboratively reconstruct four different subsets dataset see age times taken collaboratively reconstruct four different subsets dataset see table computed time taken capture sequences subset times length longest sequence assuming parallel capturing average mapping time time taken relocalise agents compute optimised global poses maps account random selection frames relocalise globally mapped subset times reported average time table average total time start capturing process output globally consistent map half hour subsets tested conclusion paper shown collaboratively reconstruct dense volumetric models scenes using multiple agents existing collaborative mapping approaches traditionally suffered inability trust local poses produced mobile agents forcing perform costly global optimisations server ensure consistent map limiting ability perform dense volumetric mapping collaboratively leveraging recent mobile hardware advances construct rigid local need refinement joining using regressionbased relocaliser avoid expensive global optimisations opting refine relative poses individual agents overall maps system allows multiple users collaboratively reconstruct consistent dense models entire buildings half hour using consumergrade hardware making easier ever users capture detailed scene models scale acknowledgements work supported innovate project streetwise epsrc erc grant epsrc grant seebibyte grant would also like thank manar marzouk maria anastasia tsitsigkou help collaborative dataset collection upplementary aterials dataset dataset comprises different subsets flat house priory lab containing number different sequences successfully relocalised name sequence prefixed simple identifier indicating subset belongs flat house priory lab basic information sequences subset frame counts capture times found table illustrations sequences fit together make combined scenes shown figures sequence captured using asus zenfone augmented reality smartphone produces depth images resolution colour images resolution improve speed able load sequences disk resized colour images size produce collaborative reconstructions show paper nevertheless provide original resized images part dataset also provide calibration parameters depth colour sensors poses sensors frame optimised global pose produced sequence running approach sequences subset finally provide mesh sequence optimised global pose allow sequences subset loaded meshlab cloudcompare common coordinate system additional experiments section describe additional experiments performed evaluate method evaluate extent approach incrementally clustering relative transform samples different pairs able remove 
outliers find consistent relocalisations evaluate scalability approach interactive collaborative reconstruction finally evaluate impact frame compression final reconstruction quality able achieve server show compression enabled maintain mapping whilst discarding fewer frames allowing reconstruct higherquality models sequence frame count capture time table sequences subset dataset effectiveness relative transform clustering described main paper incrementally cluster relative transform samples add pair prior performing pose graph optimisation suppress effect outliers final result achieved checking new relative transform sample see existing cluster added specify possible iff within existing relative transform cluster add sample first cluster find create new cluster evaluate effectiveness approach took pairs sequences used evaluate depth difference verifier main paper relocalised every frame scene using relocaliser scene counted number relative transform samples added process examined clusters collected particular compared size largest cluster case largest correct cluster size largest cluster whose blended transform obtained blending relative transforms cluster using dual quaternion blending within blended transform correct cluster refer latter cluster largest incorrect cluster difference two sizes gave measure safety margin approach case figure collaborative reconstruction flat sequences dataset collectively represent flat images show individual reconstruct sequence images show combined map figure collaborative reconstruction house sequences dataset collectively represent house monocolour images show individual reconstruct sequence images show combined map figure collaborative reconstruction priory sequences dataset collectively represent house images show individual reconstruct sequence images show combined map number consistent erroneous samples would need added largest incorrect cluster cause chosen instead correct cluster results table show pairs scenes size correct cluster significantly larger size largest incorrect cluster indicating practice likely accumulate samples correct cluster become confident long accumulate samples incorrect cluster two pairs scenes safety margins much lower cases however cases pairs scenes question comparatively low overlap see green yellow sequences figures moreover whilst blended transforms correct cluster largest incorrect cluster case within manual inspection relevant transforms showed still comparatively close within meaning safety margin hitting cluster grossly incorrect blended transform practice somewhat higher case scalability interactive mode evaluate scalability approach interactive mode time collaborative reconstruction involving server three different clients desktop using kinect camera relocaliser based random ferns connected server wired network laptop using orbbec astra camera regression forest relocaliser connected server wifi iii local process machine server reads posed frames disk frame processing times server client course experiment together detailed description happening stage process shown figure might expected time taken per frame client unaffected number agents connected server allowing local reconstruction remain interactive even many clients connected moreover time taken per frame server increases small amount additional agent added allowing server user continue view collaborative reconstruction interactively different angles even multiple different clients connected large number clients connected figure collaborative reconstruction 
lab sequences dataset collectively represent research lab images show individual subscenes reconstruct sequence images show combined map scene scene total frames samples added correct cluster largest incorrect cluster safety margin table evaluating extent approach incrementally clustering relative transform samples different pairs able remove outliers find consistent relocalisations attempt relocalise every frame using record total number samples added equal number relocalised frames passed depth difference verification sizes correct cluster largest incorrect cluster produced method case together percentages samples added sizes represent safety margin scene pair refers number consistent erroneous samples would need added largest incorrect cluster cause chosen instead correct cluster server currently becomes less interactive mapping components client run sequentially something plan mitigate future running mapping components parallel frame compression demonstrate impact compressing frames transmit network final reconstruction quality server used cloudcompare compare reference model office two reconstructions performed frame server allocates scene collaboration starts continues client relocalises figure experiment showing frame processing times server three individual clients change course collaborative reconstruction using system note smooth times window make graph slightly easier read prior start experiment server allocates mapping component first client waits connect experiment starts first client desktop using kinect camera relocaliser based random ferns connects server wired network reconstruction begins client starts sending frames server subsequently second client laptop using orbbec astra camera regression forest relocaliser connects server wifi causes server allocate mapping component second client start reconstructing local scene based frames sends across parallel server starts trying relocalise local scenes two clients separate thread around half way experiment tracking second client fails triggering local relocalisation causes spike processing time client towards end experiment third client time local one machine server reads posed frames disk connects server allocates mapping component processing time client lower need perform camera tracking point server trying relocalise three local scenes finally three clients disconnect reconstruction stops however relocalisation continues server consistent reconstruction achieved server continues rendering three local scenes allow user visualise happening data sent wifi connection one frame compression disabled enabled see figure measured bandwidth wifi connection roughly less roughly needed transmit sequence compressed form without loss wired connection much less roughly would needed transmit sequence without loss uncompressed words compression enabled able transmit around frames sequence wifi whilst maintaining rates without compression drops figure shows significant effect resulting reconstruction quality compression disabled lose parts map completely higher error rate across map whole compression enabled manage reconstruct less entire map error rate greatly reduced additional implementation details section describe implementation details might relevant anyone wanting reimplement approach namely inner workings pooled queue data structure mentioned main paper way render global map reconstructed details skipped casual readers reference model reference uncompressed reference compressed figure example showing impact frame compression 
reconstruction quality able achieve server whilst maintaining frame rates discarding frames reference model office differences reference model model reconstructed frames managed transmit without using compression differences reference model model reconstructed frames managed transmit compression comparisons made using cloudcompare errors range blue red missing compression enabled forced discard far fewer frames allowing achieve much lower error rate respect reference model pooled queue data structure pooled queue data structure pairs normal queue objects type pool reusable objects underlying goal minimising memory reallocations performance reasons normal queues conventionally support following range operations empty checks whether queue empty peek gets object front queue pop removes object front queue push adds object back queue size gets number objects queue pooled queues support range operations implementations push pop necessarily complicated interactions pool pop operation straightforward two simply removes object front queue returns pool appropriate synchronisation context push operation complicated divide two parts begin push end push begin push operation first checks whether reusable object currently available pool removes pool returns push handler encapsulating object caller caller modifies object push handler calls end push actually push object onto queue reusable object available pool begin push called range options available policies implementation discard object trying push onto queue grow pool allocating new object reused iii remove random element queue return pool reused wait another thread pop object queue return pool practice found discard strategy work best approach growing pool disadvantage memory usage grow without bound time waiting makes client server case may less interactive removing random element queue functions much like discarding different frames global map rendering render global map reconstructed global pose first compute agent corresponding local pose local coordinate system using standard raycasting approach scene render synthetic colour depth images resolution agent local pose computed call respectively finally set colour pixel output image based depth testing agents argmin ranges domain references bajpai burroughes shaukat gao planetary monocular simultaneous localization mapping jfr cavallari golodetz lord valentin stefano torr adaptation regression forests online camera relocalisation cvpr chebrolu martinet collaborative visual slam framework system ppniv cheein carelli agricultural robotics unmanned robotic service units agricultural tasks ieee industrial electronics magazine chen zhong lou based map fusion distributed robot system robio choudhary carlone nieto rogers christensen dellaert distributed mapping privacy communication constraints lightweight algorithms models arxiv preprint cieslewski choudhary scaramuzza dataefficient decentralized visual slam arxiv preprint cieslewski lynen dymczyk magnenat siegwart map api scalable decentralized map building robots icra pages cignoni callieri corsini dellepiane ganovelli ranzuglia meshlab mesh processing tool eurographics italian chapter conference pages cloudcompare gpl software retrieved http cunningham indelman dellaert consistent distributed smoothing mapping icra pages cunningham paluri dellaert fully distributed slam using constrained factor graphs iros pages cunningham wurm burgard dellaert fully distributed scalable smoothing mapping robust data association icra pages dai izadi theobalt 
bundlefusion globally consistent reconstruction using online surface tog engel cremers largescale direct monocular slam eccv pages fankhauser bloesch diethelm wermelinger schneider dymczyk hutter siegwart collaborative navigation flying walking robots iros pages fioraio taylor fitzgibbon stefano izadi surface reconstruction using online subvolume registration cvpr pages fleck arth pirchheim schmalstieg tracking mapping swarm heterogeneous clients ismar forster lynen kneip scaramuzza collaborative monocular slam multiple micro aerial vehicles iros pages glocker shotton criminisi izadi realtime camera relocalization via randomized ferns keyframe encoding tvcg may golodetz sapienza valentin vineet cheng arnab prisacariu ren murray izadi torr semanticpaint framework interactive segmentation scenes technical report department engineering science university oxford october released arxiv golodetz sapienza valentin vineet cheng prisacariu ren arnab hicks murray izadi torr semanticpaint interactive segmentation learning worlds acm siggraph emerging technologies page huang dai guibas niessner towards commodity scanning content creation tog prisacariu murray dense reconstruction loop closure eccv pages kavan collins sullivan zara dual quaternions rigid transformation blending technical report trinity college dublin klein murray parallel tracking mapping small workspaces ismar pages grisetti strasdat konolige burgard general framework graph optimization icra pages lorensen cline marching cubes high resolution surface construction algorithm acm siggraph computer graphics mcdonald kaess cadena neira leonard visual slam environments ras michael shen mohta mulgaonkar kumar nagatani okada kiribayashi otake yoshida ohno takeuchi tadokoro collaborative mapping building via ground aerial robots jfr mohanarajah usenko singh andrea waibel collaborative mapping realtime robots tase murali speciale oswald pollefeys indoor building information models house interiors iros newcombe izadi hilliges molyneaux kim davison kohli shotton hodges fitzgibbon kinectfusion dense surface mapping tracking ismar pages izadi stamminger reconstruction scale using voxel hashing tog paull huang seto leonard cooperative slam icra pages prisacariu cheng ren valentin torr reid murray framework volumetric integration depth images arxiv preprint prisacariu golodetz sapienza cavallari torr murray infinitam framework reconstruction loop closure arxiv preprint reid cann meiklejohn poli boeing braunl cooperative navigation exploration mapping object detection ros riazuelo civera montiel tam cloud framework cooperative tracking mapping ras rone mapping localization motion planning mobile systems robotica saeedi trentini seto simultaneous localization mapping review jfr schmuck chli collaborative monocular slam icra pages shotton glocker zach izadi criminisi fitzgibbon scene coordinate regression forests camera relocalization images cvpr pages silveira guth ballester machado codevilla botelho opensource solution underwater slam ifacpapersonline valentin shotton fitzgibbon izadi torr exploiting uncertainty regression forests accurate camera relocalization cvpr pages whelan kaess johannsson fallon leonard mcdonald large scale dense slam volumetric fusion ijrr whelan leutenegger glocker davison elasticfusion dense slam without pose graph rss siegl vetter dreyer stamminger aybek bauer reconstruction excavation sites jocch
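Two of the implementation details described above lend themselves to short sketches. First, the pooled queue: a FIFO queue paired with a pool of reusable messages so that pushes never reallocate, shown with the "discard" push policy the authors found to work best. Class and method names are illustrative:

```python
import threading
from collections import deque

class PooledQueue:
    """Queue + pool of reusable messages; begin_push/end_push split the push
    so the caller can fill the borrowed message in place before queueing it."""
    def __init__(self, make_message, capacity):
        self._queue = deque()
        self._pool = deque(make_message() for _ in range(capacity))
        self._lock = threading.Lock()

    def begin_push(self):
        # 'Discard' policy: if no reusable message is free, drop this push.
        with self._lock:
            return self._pool.popleft() if self._pool else None

    def end_push(self, message):
        with self._lock:
            self._queue.append(message)

    def peek(self):
        with self._lock:
            return self._queue[0] if self._queue else None

    def pop(self):
        # Remove the front message (typically after a peek) and recycle it.
        with self._lock:
            if self._queue:
                self._pool.append(self._queue.popleft())
```

Second, the per-pixel depth test used to composite the agents' individual raycasts into one rendering of the global map; array shapes and the validity convention are assumptions:

```python
import numpy as np

def composite_global_view(colours, depths):
    """colours: (n_agents, H, W, 3); depths: (n_agents, H, W), <= 0 invalid.
    Each output pixel takes the colour of the agent whose surface is nearest."""
    z = np.where(depths > 0, depths, np.inf)  # invalid pixels lose the depth test
    winner = z.argmin(axis=0)                 # per-pixel argmin over agents
    h, w = winner.shape
    return colours[winner, np.arange(h)[:, None], np.arange(w)[None, :]]
```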
exterior symmetric powers modules cyclic frank himstedt peter symonds oct abstract prove recursive formula exterior symmetric powers modules cyclic makes computation straightforward previously complete description known cyclic groups prime order introduction aim paper provide recursive procedure calculating exterior symmetric powers modular representation cyclic let cyclic group order field characteristic recall indecomposable dim theorem integer chosen sides dimension syzygy heller operator group action factors exterior powers modules computed applying formula smaller group particular one determine exterior powers right hand side formula way also show simple recursive procedure calculating tensor products since obtain complete recursive procedure calculating exterior powers possible modules sufficiently efficient easy calculate even hand far beyond range previously attainable machine computation symmetric powers use following result theorem mod symbol means direct summands induced subgroups thus knowledge exterior powers determines symmetric powers induced summands fact shown formula determines symmetric powers completely using recursive procedure project supported deutsche forschungsgemeinschaft project invariantentheorie endlicher und algebraischer gruppen exterior symmetric powers modules cyclic formulas exterior symmetric powers module cyclic group prime order given almkvist fossum renaud extended cyclic hughes kemper provided power formula case cyclic given gow laffey also kouwenhoven obtained important results exterior powers modules cyclic pgroups including recursion formulas power formulas special cases direct consequences theorem obtain independent proofs results strategy consider quotient ideal generated squares elements turns need consider intermediate ring quotient squares elements show resolved koszul complex squares elements basis show koszul complex separated sense image boundary map contained projective submodule leads formula symbol means projective summands using theorem right hand side easily seen equal right hand side formula theorem modulo induced summands yields formula theorem modulo induced summands strengthening equality modulo projective summands formal inductive argument would like thank dikran karagueuzian calculations helpful discovering formula theorem koszul complexes let finite group subgroup field characteristic tensor products otherwise specified recall general facts chain complexes section definition definition chain complex called acyclic negative degrees homology degree weakly induced module induced weakly induced except degrees induced separated factors projective separated separated write inclusion factors projective factors injective hull call injective equivalent projective modular representations injective since socle thus write lemma lemma chain complexes separated total tensor product similarly product finitely many chain complexes exterior symmetric powers modules cyclic proof let projective module similarly summing degrees short exact sequence first two terms projective hence third need consider complexes details construction see lemma suppose every elementary abelian conjugate subgroup let complex separated complex also separated proof proof lemma image contained proof shows module projective restriction projective chouinard theorem corollary next two results comprise variation proposition proof proposition let arbitrary subgroup suppose complex acyclic weakly induced except one degree separated restriction separated recall heller translate 
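The restriction, induction, syzygy and tensor-product rules garbled in this section read, in standard notation for the cyclic 2-group G of order q = 2^k with maximal subgroup H, and offered as a reconstruction from context:

```latex
V_n{\downarrow_H} \;\cong\; V_{\lceil n/2 \rceil} \oplus V_{\lfloor n/2 \rfloor}, \qquad
V_m{\uparrow^G} \;\cong\; V_{2m}, \qquad
\Omega(V_a) \;\cong\; V_{q-a}, \qquad
V_a \otimes V_b \;\equiv\; \Omega\!\left( V_{q-a} \otimes V_b \right) \pmod{\mathrm{proj}}.
```

Because a kG-module is determined by the Jordan type of a generator g, the recursive tensor-product procedure can also be checked numerically over GF(2). A small sketch (all helper names are mine):

```python
import numpy as np

def jordan_block(n):
    """Matrix of a generator g on the indecomposable V_n over GF(2):
    a single unipotent Jordan block with ones on the superdiagonal."""
    J = np.eye(n, dtype=np.uint8)
    J[np.arange(n - 1), np.arange(1, n)] = 1
    return J

def rank_gf2(A):
    """Rank over GF(2) by Gaussian elimination with XOR row operations."""
    A, r = A.copy() % 2, 0
    for c in range(A.shape[1]):
        pivots = np.nonzero(A[r:, c])[0]
        if pivots.size == 0:
            continue
        A[[r, r + pivots[0]]] = A[[r + pivots[0], r]]
        rows = np.nonzero(A[:, c])[0]
        A[rows[rows != r]] ^= A[r]
        r += 1
        if r == A.shape[0]:
            break
    return r

def jordan_type(M):
    """Multiplicity of V_s in the module on which g acts by M, read off from
    the ranks of powers of N = M - I (minus equals plus in characteristic 2)."""
    n = M.shape[0]
    N = (M + np.eye(n, dtype=np.uint8)) % 2
    ranks, P = [n], np.eye(n, dtype=np.uint8)
    while ranks[-1] > 0:
        P = P.dot(N) % 2
        ranks.append(rank_gf2(P))
    ranks.append(0)
    return {s: ranks[s - 1] - 2 * ranks[s] + ranks[s + 1]
            for s in range(1, len(ranks) - 1)
            if ranks[s - 1] - 2 * ranks[s] + ranks[s + 1] > 0}

def tensor_decomposition(a, b):
    """Decompose V_a (x) V_b via the Jordan type of g (x) g."""
    return jordan_type(np.kron(jordan_block(a), jordan_block(b)))

# e.g. tensor_decomposition(2, 3) -> {2: 1, 4: 1}, i.e. V2 (x) V3 = V2 + V4
# for C_4, where V4 = kC_4 is projective, as dimension comparison predicts.
```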
defined kernel projective cover denotes iterated times similarly cokernel injective hull iteration let denote projective summands removed properties induced lemma suppose complex acyclic separated let space submodule rof write symmetric algebra exterior algebra let denote module definition let submodule let denote kgsubmodule spanned powers elements let denote koszul complex graded wip wjp write consider complex exterior symmetric powers modules cyclic squaring map gives isomorphism regard copy degree equipped squaring map point view boundary map given normally take second point view assume large part paper since written significant restriction lemma context definition complex acyclic homology degree ideal generated elements proof basis wrp regular sequence spans standard result koszul complexes lemma let spaces let submodules respectively complex isomorphic total tensor product complex graded proof analogous lemma also need deal tensor induction graded modules complexes lemma let subgroup let graded characteristic complexes graded without restriction characteristic would deal sign convention appears definition action complex proof let set coset representatives write formulas follow usual formulas sum definition group action tensor induced module modules cyclic let hgi cyclic group order field characteristic write green ring isomorphism indecomposable choose notation dimk convenience write module generator acts matrix jordan block ones diagonal choose gxi element written uniquely polynomial identify spanned uniserial composition series note kernel nontrivial identified indecomposable module quotient group decompositions tensor products indecomposables studied several authors see example case decomposition easily computed using heller translate write instead want exterior symmetric powers modules cyclic emphasize working modules group easy check proj means modulo projective modules recall projective part determined comparing dimensions cyclic groups provides easy recursive method calculating decomposition tensor products case cyclic order calculate may assume write smallest possible modulo copies efficient write example copies comparing dimensions get consider nonfaithful module module factor group get copies comparing dimensions obtain hence let unique maximal subgroup also denote indecomposable dimension course abuse notation always make clear whether consider elementary calculation jordan canonical forms shows restriction operator given generated generated particular odd even induction operator given say induced induced proper subgroups let submodule generated projective modules submodule generated induced modules notice ideals induction maps restriction maps following lemmas deduce information short exact sequences restriction lemma let induced proper subgroup induced proof assume indecomposable since induced indecomposable direct summand even dimension thus dim even induced introduction write isomorphisms modulo induced modulo projective summands respectively lemma let induced proof since induced true inducing projective yields projective obtain claim follows lemma let proof induced modules restriction obtain lemma cancel summands original formula exterior symmetric powers modules cyclic lemma let short exact sequence separated restriction sequence separated sequence proof hypotheses imply induced modules lemma also proj proj applied follows hence lemma thus short exact sequence projective summands consider long exact sequence tate ext homkg homkg homkg ext homkg denotes homomorphisms 
modulo factorize projective since homkg dim dim dim homkg dim homkg dim homkg dim homkg dim therefore dim dim homkg hence injective ida factors projective required next lemma describes tensor induction modulo induced modules gives information structure exterior algebra terms lemma let integers consider let induced induced odd induced even induced odd ind proof follows iii construction induced modules gvj vector spaces action generator given natural isomorphism gvj gvj vector spaces thus gvj gvj exterior symmetric powers modules cyclic via isomorphism right hand side becomes action see gvj gvj isomorphic gvj submodule isomorphic proof similar note odd summand corresponding leads tensor induced submodule occur say induced except possibly one trivial summand isomorphic prove simultaneously showing induced except possibly one trivial summand claim follows fact dimk even even proof induction assume even induced proper subgroup implies direct sum modules induced even proper subgroups assume odd write first treat case mackey formula tensor induction induced one trivial summand assume get induction case know left hand side induced except possibly one trivial summand hence induced except possibly one trivial summand see symmetric exterior powers even dimensional indecomposable modules particularly restricted form corollary suppose integers odd furthermore assume induced unless induced modules numbers indecomposable summands respectively proof using lemma see induced direct summands subgroup index induced direct summands thus induced part lemma induced direct summands description given seen valid using parts case reduces theorem corollary every direct summand dimension divisible induced proof identity may assume indecomposable say claim follows corollary exterior symmetric powers modules cyclic proof main result often information modulo induced direct summands following definition lemmas deal splitting maps situations recall map split injective map ida maps write definition let map say split injective modulo induced summands exists induced map split injective split injective modulo induced summands behaves much way split injective lemma given maps split injective modulo induced summands split injective modulo induced summands split injective modulo induced summands proof assumption induced modules maps ida idb define ida parts proved similar way proofs left reader lemma let map write summands induced summands let denote inclusion split injective modulo induced summands split injective proof suppose split injective modulo induced summands want show split injective lemma map split injective modulo induced summands assume show split injective since split injective modulo induced summands induced module maps ida since summands common know lies radical endkg note indecomposable write elements endkg matrices entries homkg radical consists morphisms component isomorphism thus surjective hence automorphism split injective conversely suppose split injective map let denote inclusion projection onto define ida split injective modulo induced summands remark proof shows induced module definition always chosen way contains indecomposable direct summands also occur remark definition makes sense finite group class indecomposable modules lemmas remain true exterior symmetric powers modules cyclic turn certain symmetric exterior powers modules cyclic contained green ring spanned indecomposable modules satisfying mod describe properties lemma submodule subring closed proof part clear definitions part need show mod 
suppose remarks computation tensor products beginning section mod integers mod consider modules claim follows induction part main theorem assume field elements cyclic group order know lemma acyclic homology degree turn closely related exterior algebra natural study structure graded ring integer write use similar notation denote kernel natural epimorphism graded modules let choose section simplicity write xtop element written uniquely polynomial xtop set xtop let let image image element still invariant txtop latter case invariant cases homogeneous degree degree considered polynomial xtop elements invariant action image also write image element xtop next theorem main result since representation field characteristic written part implies theorem record parts since also interest form integral part proof theorem let integers separation complex separated graded periodicity keb cases isomorphism right left induced product exterior symmetric powers modules cyclic splitting short exact sequence graded induced split exterior powers following isomorphism case little unnatural need induction restriction sometimes succinct consider hilbert series coefficients green ring possibly modulo projectives induced modules details see particular consider following series associated last requires specified order determined naturally considered modulo projectives commute restriction turn direct sums modules products series easy consequence corresponding properties corresponding functors modules except perhaps need formula many statements modules imply hilbert series versions theorem separation splitting periodicity exterior powers symbols mean consider coefficients modulo induced projective direct summands respectively first last identities fact equivalent original versions second identity follows theorem lemma lemma theorem proved remark easy calculation shows fixed last formulas follows formally first three satisfied remark proof theorem actually gives precise formula first one works showing complex defined separated applying lemma note definition different definition result since given basis permuted monomial basis permuted small decomposition calculated hand general calculation organized using proposition alternatively proposition applied directly next six sections devoted proof theorem induction exterior symmetric powers modules cyclic case section start inductive proof theorem suppose prove statements theorem assumptions parts theorem easily verified direct calculation fact one obtains isomorphisms modulo induced summands one gets separation trivial let consider part show short exact sequence ser separated odd induced theorem hence projective separation obviously true separation trivial even direct calculation separation theorem show ser follows lemma sections comprise inductive step proof theorem sections always assume integer theorem holds smaller values throughout sections notation remains sections thus hgi cyclic group order field two elements integer periodicity section prove part theorem assuming parts theorem hold smaller values let unique maximal subgroup let xtop section choose elements section let spanned monomials xtop divisible see lemma notice periodicity theorem equivalent induced fact know something stronger corollary namely induced obtaining make construction define subcomplex defined using instead boundary morphisms definition done since xtop used definition contained thus complex graded exact except degree homology isomorphic notice construction complexes isomorphic particular note later use 
one separated fix abbreviate notation etc exterior symmetric powers modules cyclic suppose claim lri induced may assume induced thus complex induced consider restriction complex subgroup decomposes tensor product two complexes lemma separated induction theorem hence product lemma follows complex separated restriction seen complex induced modules thus complex separated proposition lemma shows induced exactly periodic suppose argument see keb induced complete proof theorem show set permuted basis given set consisting write yei image monomials degree yei occur power group permutes monomials forms straightforward check two invariant monomials namely rest span induced submodules completes proof periodicity splitting section prove part theorem assuming whole theorem smaller let unique maximal subgroup xtop kernel natural surjection section theorem write following proposition deals structure degrees less proposition integer short exact sequence induced graded split starting proof proposition introduce notation described beginning section xtop write respectively odd even provide natural embeddings isomorphism given exterior symmetric powers modules cyclic according description preceding choose rem working thus homogeneous degree degree considered polynomial xtop furthermore invariant action image similarly choose homogeneous degree degree considered polynomial invariant action image induction periodicity generators respectively variant lemma let integer let periodicity generator generated space images wherelthe monomials degree strictly less considered polynomial xtop degree strictly less considered polynomial proof give monomials lexicographic order xtop let leading term since write degree considered polynomial leading term involves monomials xtop find thus comparing dimensions see sum sum direct lemma follows ready prove proposition proof proposition study restriction sequence maximal subgroup lemma middle term owing choice therefore fact construction kge induced consider exact sequence submodule ser know lemma sequence split restricted since induced relatively sequence splits see theorem exterior symmetric powers modules cyclic thus direct summand follows since induction ignoring grading write graded restricting sequence obtain sequence split induction thus induction obtain equations imply follows induced proper subgroups lemma induced seen induced hence relatively sequence split restriction since sequence must split completes proof proposition exterior powers following corollary provides connection degrees less corollary integers map induces isomorphism modulo induced summands ser proof clear proposition section prove theorem induced part theorem follows part keb induced note maps generator part theorem consequence preparation separation section prepare proof part theorem assuming whole theorem smaller let unique maximal subgroup let xtop section main goal section develop useful criteria complex separated lemma let integers suppose mod complex separated separated kir true replaced section exterior symmetric powers modules cyclic proof demonstrate proof proof analogous write short fix consider boundary morphism show factors projective since separated inclusion factors projective write inclusion composition inclusions last map factors projective lemma let integers suppose complex separated following statements equivalent separated natural map split injective modulo induced summands proof write ser ser conditions lemma show separated except perhaps restriction complex decomposes tensor product 
two complexes lemma separated continuing induction hypothesis hence product lemma separated restriction thus short exact sequence ser separated restriction maps confused indices part lemma separation positive degrees lemma yield formula theorem shows let ser natural surjection proposition map split injective modulo induced summands lemma enough show split injective modulo induced summands assumption submodules projective ker let projection onto natural embedding define ser note restriction injective ids split injective modulo induced summands assume factorization lemma imply also split injective modulo induced summands write summands induced lemma restriction split injective maps injectively ser direct summand ser factoring obtain short exact sequence ser exterior symmetric powers modules cyclic seen beginning proof factors projective restriction true thus complex separated restriction induced complex separated proposition lemma yields ser using obtain ser theorem implies adding summand sides using corollary gives formula assume holds corollary theorem get ser separation follows applying lemma short exact sequence separation trivial prove notice map factor projective must map must factor projective cover kernel certainly mapped lemma integer complex separated proof complex question induction map factors projective restriction write summands induced summands component factors projective proposition claim component must theorem know module follows contains summands dimension let summand suppose component must factor projective restriction map discussion none components factor projective module unless readily prove separation even lemma even integer complex separated proof write lemma know right hand side separated lemma induction hypothesis view lemma assume odd lemma let odd integer given induction hypothesis subgroup green ring lemma exterior symmetric powers modules cyclic proof part dimension module range know formula exterior powers see theorem valid continuing induction hypothesis statement clearly true employ induction using formula properties lemma part use formula remark end section part summands permutation modules monomial basis unless stabilizer monomial index first happens degree monomial fixed subgroup order contains must also contain elements orbit separation first make general constructions related symmetric exterior powers vector spaces convenient integrally first reduce modulo let free module integers localized set times let symmetric group act permuting factors factoring action get also let act permuting factors multiplying signature permutation case write similarly factoring action obtain subset set subgroup isomorphic write consider subgroup involution mapping importance lies fact index odd seen folp lows equal coefficient mod mod also define natural quotient maps sections trs lrs lrs given property trs ids maps natural transformations functors free writing see description lrs similarly involution signature provided exterior symmetric powers modules cyclic let space let free let denote one functors use define functor name spaces gives expected result order verify really functor vector spaces notice two free natural map homz surjective maps vector spaces lift furthermore map kernel image factors multiplication multiplication induces multiplication thus induces follows formulas also valid spaces difference natural transformations lre lrs induced reducing modulo squares functors induce functors modules group obvious way remark representation field characteristic written 
sufficient purposes really needed functors vector spaces bigger field could achieved starting larger ring rest section prove part theorem assuming whole theorem smaller use notation lemma suppose lre lrs split injective modulo induced summands split injective modulo induced summands proof consider commutative diagram trs lrs map trs split injective lre split injective modulo induced summands lre trs lemma equal lemma shows split injective modulo induced summands lemma odd integer integer split injective modulo induced summands proof use induction cases trivial covered lemma combined lemma let write abbreviate lemma sufficient check lre lrs split injective modulo induced summands lre split injective modulo induced summands induction lemma lre induction split injective modulo induced summands extends split injective map left inverse induced remark lemma may assume contains summands also summands lemma exterior symmetric powers modules cyclic assumption hence since dimension divisible applying see extends left inverse certainly induced induced corollary thus lre thus split injective modulo induced summands let odd integer follows lemmas complex separated recall separated complex section separated rest section write etc separated coincides range show separated induction let assume complex separated lower degrees lemma also assume separated positive degrees enough prove short exact sequence ter separated lemma restriction maximal subgroup decomposes tensor product two complexes separated continuing induction hypothesis theorem product also separated lemma hence follows sequence separated restriction induced range separation follows immediately proposition applied proves complex separated part theorem follows exterior powers section prove part theorem assuming whole theorem smaller already proved separation periodicity splitting know see first remark end section order obtain formula first consider restriction subgroup index writing two sides formula become know induction similarly thus restriction two sides equal modulo projectives use lemma order see two sides equal modulo projectives even restriction finally completes proof theorem exterior symmetric powers modules cyclic bound number summands description tensor product given section shows decomposition indecomposable summands involves summand odd dimension odd case contains precisely one summand let write summ number indecomposable summands module proposition number summands dim dim proof let denote number summands comment tensor product shows proposed bound also turns sums products suffices consider case indecomposable show use induction since cases trivial assume write setting formula using induction obtain indecomposable assume group acts faithfully dimension direct summand follows dimension part divided dimension whole remarks already mentioned introduction formula theorem reduces computation computation tensor products exterior powers modules smaller dimension since tensor products easily determined recursively see section gives efficient recursive method calculating decomposition exterior powers modules cyclic indecomposables program based recurrence relation implemented gap first author restriction use theorem growth multiplicities direct summands form example multiplicity direct summand one interested part recurrence relation applied modulo induced summands keep multiplicities relatively small together results recurrence relation theorem also provides algorithm computing decomposition symmetric powers indecomposables arbitrary 
exterior symmetric powers modules cyclic example determine decomposition indecomposables furthermore duality thus comparing dimensions obtain obviously gow laffey formula exterior squares theorem special case theorem furthermore setting theorem gives kouwenhoven formula theorem power theorem kouwenhoven proved formula power prime see section definition show derived theorem case note since dimension series two sides match sufficient prove modulo projectives theorem gives modulo modulo latter written exactly last term written inside parentheses since invertible applying odd degrees obtain substituting left hand side yields modulo easy verify modulo theorem also used calculate adams operations green ring shown roger bryant marianne johnson define element log exterior symmetric powers modules cyclic extension map called rth adams operation defined exterior powers shown odd identity map remains describe see details write seen applying definition adams operations hilbert series form theorem obtaining obvious notation references almkvist fossum decompositions exterior symmetric powers indecomposable characteristic relations invariants dubreil lecture notes math berlin heidelberg new york barry decomposing tensor products exterior symmetric squares group theory barry generators decompositions tensor products modules arch math benson representations cohomology cambridge studies advanced mathematics cambridge univ press cambridge bryant johnson adams operations green ring cyclic group order algebra bryant johnson periodicity adams operations green ring finite group pure appl algebra chouinard projectivity relative projectivity group rings pure applied algebra curtis reiner methods representation theory wiley new york green modular representation algebra finite group illinois math gap group gap groups algorithms programming version http gow laffey decomposition exterior square indecomposable module cyclic group theory himstedt symonds equivariant hilbert series algebra number theory hou elementary divisors tensor products binomial matrices linear algebra appl hughes kemper symmetric powers modular representations hilbert series degree bounds comm algebra kouwenhoven green rings cyclic proc symposia pure norman jordan form tensor product fields prime characteristic linear multilinear algebra norman jordan bases tensor product kronecker sum elementary divisors fields prime characteristic linear multilinear algebra renaud decomposition products modular representation ring cyclic group prime power order algebra renaud recurrence relations modular representation algebra bull austral math soc exterior symmetric powers modules cyclic srinivasan modular representation ring cyclic proc london math soc symonds cyclic group actions polynomial rings bulletin london math soc technische zentrum mathematik boltzmannstr garching germany address himstedt school mathematics university manchester manchester united kingdom address
0
identification functionally related enzymes methods may michiel stock thomas fober eyke serghei glinca gerhard klebe tapio pahikkala antti airola bernard baets willem waegeman abstract enzyme sequences structures routinely used biological sciences queries search functionally related enzymes online databases end one usually departs notion similarity comparing two enzymes looking correspondences sequences structures surfaces given query search operation results ranking enzymes database similar dissimilar enzymes information biological function annotated database enzymes ignored work show rankings kind substantially improved applying learning algorithms approach enables detection statistical dependencies similarities active cleft biological function annotated enzymes contrast approaches take annotated training data account similarity measures based active cleft known outperform measures certain conditions consider enzyme commission classification hierarchy obtaining annotated enzymes training phase results set sizeable experiments indicate consistent significant improvement set similarity measures exploit information small cavities surface enzymes introduction modern technologies molecular biology generating protein sequences tertiary structures small fraction ever experimentally annotated functionality predicting biological function remains extremely challenging especially novel functions hard detect despite large number automated annotation methods introduced last decade existing online services blast relibase often provide tools search databases contain collections annotated enzymes systems rely notion similarity searching related enzymes definition similarity differs system system indeed vast number measures expressing similarity two enzymes exists literature performing calculations different levels abstraction one make major subdivision measures approaches solely use sequence amino acids approaches also take account tertiary structure approaches consider local fold information analyzing small cavities hypothetical binding sites surface enzyme measures blast computed efficient manner able find enzymes related functions certain conditions addition several kernelbased methods developed make predictions proteins sequence level see high sequence similarity usually results high structural similarity proteins sequence identity number matches alignment generally considered share structure however assumption becomes less reliable twilight zone sequence identity situated furthermore enzymes comparable functions exhibit sequences low sequence identity reasons crystal structures becoming available online databases comparison proteins structural level gained increasing attention secondary stock baets waegeman department mathematical modelling statistics bioinformatics ghent university coupure links ghent belgium email fober marburg department mathematics computer science marburg germany glinca klebe also marburg department pharmacy marbacher weg marburg germany pahikkala airola department information technology turku centre computer science university turku joukahaisenkatu turku finland enzymes biomolecules catalyze chemical reactions enzymes consider work proteins vice versa notions used interchangeably structure enzyme known highly influence biological function contains valuable information missing sequence level many approaches perform calculations overall fold protein developed see unfortunately approaches also optimal determining function enzymes require knowledge active site residues usually 
lead quite coarse representation especially enzymes often specific residues responsible catalytic mechanism example superfamily shows large functional diversity limited sequence diversity also shown parts protein structure space high functional diversity limiting use global fold similarity reasons many methods consider local structural features evolutionary conserved residues appropriate similarity measures prediction enzyme functions focus surface regions ligands substrates bind cavities surface known contain valuable information exploiting similarities cavities helps finding functionally related enzymes considering structural information binding sites one detect relationships found using traditional methods making similarities particular interest applications drug discovery addition providing complementary notion protein families methods also allow extracting relationships cavities unrelated proteins similarity measures highlight cavities binding sites subdivided approaches geometric approaches approaches measures discussed thoroughly section paper aims show search functionally related enzymes substantially improved applying methods algorithms use training data build mathematical model ranking objects enzymes necessarily seen among training data methods applied types data long meaningful similarity measure constructed demonstrate power using measures reasons explained machine learning algorithms often used applications information retrieval due proven added value search engines machine learning methods gained popularity bioinformatics example drug discovery find similarities proteins despite many online services blast pdb dali cavbase solely rely similarity measures construct rankings without utilizing annotated enzymes learning algorithms steer search process training phase however due presence annotated enzymes online databases improvements made applying machine learning algorithms amounts transition unsupervised supervised learning scenario using four different similarity measures one based sequence alignment input rankrls ranking algorithm demonstrate significant improvement measures rankrls works similar way competitors ranksvm uses annotated training data learn rankings training phase training data annotated via enzyme commission functional classification hierarchy commonly used way subdivide enzymes functional classes numbers adopt hierarchical structure representing different levels catalytic detail importantly representation focuses chemical reactions performed structure homology explained elaborately section numbers used construct catalytic similarity measure subsequently generate rankings addition obtaining annotated training data procedure also allows fair comparison traditional approach using conventional performance measures rankings way evaluating also characterizes difference search engine approach previous work supervised learning algorithms number assignment considered far complete list see work unable compare methods return rankings output nonetheless similar approaches take hierarchical structure numbers account instead predicting one number ranking functionally related enzymes returned given query scheme top obtained ranking expected contain enzymes functions similar query enzyme unknown number ranking provides end users generally easily understandable output still useful results retrieved enzyme new number encountered material methods database work builds upon cavbase database made commercially available part relibase cavbase used automated detection extraction 
storage protein cavities experimentally determined protein structures available protein data bank pdb geometrical arrangement pocket properties first represented predefined pseudocenters spatial points characterize geometric center functional group specified particular property type spatial position pseudocenters depend amino acids border binding pocket expose functional groups derived protein structure using set predefined rules donor acceptor mixed hydrophobic aliphatic metal ion accounts ability form interactions aromatic properties considered possible types pseudocenters pseudocenters regarded compressed representation surface areas certain interactions encountered consequently set pseudocenters approximate representation spatial distribution properties build test models require appropriate data set contains sufficiently many proteins classes based experience local pharmaceutical experts chose data set classes depicted table generate first data set data set retrieved proteins pdb got assigned one classes thus ended set proteins ensure unique proteins contained data set used protein culling default parameterization proteins high pairwise homology filtered procedure resulted data set cardinality extract active site protein used assumption largest binding site protein contain catalytic center hence protein took binding site database cavbase maximized volume data set proteins contained cavbase structure determined nmr instead therefore proteins removed data set resulting final data set size first data set comes two drawbacks first binding site containing catalytic centre determined pure heuristic namely taking largest binding site among binding sites protein exhibits moreover sufficient resolution criterium selecting cavities may lead data set low quality therefore relying expertise pharmaceutical experts compiled another data set referred data set containing classes data set proteins pdb resolution least considered moreover binding site volume required range structures meeting conditions eliminated since resolutions usually lead coarse representation binding sites volumes outside range usually artefacts produced algorithm used detection resulting set proteins active site selected resulted data set enzymes applied protein culling server finally end second data set enzymes pairwise sequence similarity matrix phylogenetic tree data sets found supplementary materials similarity measures cavities introduction motivated analysis restricted similarity measures cavities objects represented multiple ways measures transforming cavities graphs allows apply traditional techniques compare graphs unfortunately techniques construct boolean similarity measure based graph isomorphisms appropriate comparing noisy flexible protein structures computing maximum common subgraph considered appropriate alternative method used paper baseline see graph edit distance another measure compare graphs specifying number edit operations needed transform given graph another graph distance calculated different ways using greedy heuristic quadratic programming unfortunately graph edit distance hard parameterize often quite inefficient efficient approaches belong class graph kernels gained lot attention bioinformatics allow sufficiently high degree error tolerance different realizations available shortest path kernel random walk kernel graphlet kernel graph kernels work particularly well small molecules ligands less useful larger molecules proteins gave rather poor results explains concentrated maximum common subgraph 
representative approaches second category measures cavities geometric methods directly process labeled spatial coordinates functional parts denoted point clouds instead transforming protein cavity graph remarkably approaches proposed build representation geometric hashing employed http table list numbers accepted name number examples class two data sets number accepted name alcohol dehydrogenase aldehyde reductase dihydrofolate reductase peroxidase camphor thymidylate synthase diphthine synthase phosphorylase transglycosylase enzyme kinase acetylcholinesterase trypsin thrombin aldolase carbonate dehydratase tryptophan synthase xylose isomerase steroid ligase set set calculate superposition protein cavities used derive alignment similarity score similar approach used optimization problem solved instead applying geometric hashing beside two approaches several methods exist comparing two point clouds unfortunately majority methods cope biological data due high complexity error intolerance third family approaches one also represent protein cavity feature vector taking geometry cavity properties account see subsequently traditional specialized measures applied vectors obtain similarity scores protein cavities experiments selected representative method three groups one measure one geometric measure one measure also considered original cavbase measure measure obtained protein sequence alignment lead comparison five different measures four based cavities one based sequence alignment measures explained detail labeled point cloud superposition lpcs value obtained processing labeled point clouds hence cavbase data used directly without need transforming another representation intuitively two labeled point clouds considered similar spatially superimposed specifically approximate superposition two structures obtained fixing first point cloud moving second point cloud whole two point clouds well superimposed point first cloud matched point second point cloud distances points small labels consistent concept used define fitness function maximized using direct search approach obtained maximal fitness taken similarity two labeled point clouds similar measure also proposed convolution kernel suggested obtain similarities point clouds maximum common subgraph mcs using mcs original representation form labeled point cloud must transformed graph pseudocenter becoming node labeled corresponding property capture geometry complete graph considered edge weighted euclidean distance two pseudocenters adjacent problem measuring similarity protein cavities boils problem measuring similarity graphs approach search maximum common subgraph two input graphs define similarity size maximum common subgraph relative size larger graph case noisy data threshold required defining two edges equal weight differs paper parameter set recommended several authors cavbase similarity cavbase also makes use algorithm detection common subgraphs instead considering largest common subgraph done case mcs largest common subgraphs considered common subgraph used determine transformation rule means kabsch algorithm superimposes proteins step surface points also superimposed according transformation rule similarity score derived using surface points eventually set similarity values obtained highest value returned similarity two protein cavities fingerprints fingerprints concept used successfully many domains comparison protein binding sites authors transformed protein binding site graph described moreover defined generically set features namely 
complete graphs size feature test performed decide whether feature contained graph representing protein done subgraph isomorphism checks whether labels identical nodes features labeled set physiochemical properties edges patterns labeled intervals bins instead testing equivalence test performed whether edge weight graph representing protein falls bin pattern thus generated fingerprints compared means jaccard similarity measure proposed beside using approaches compare protein binding sites used also sequence alignment experimental study calculate sequence alignments used smithwaterman algorithm parameterized matrix sequence alignment derived sequence identity subsequently used perform experiments unsupervised ranking introduction explained existing online services blast pdb dali cavbase construct rankings unsupervised way systems create ranking means similarity measure without training model uses annotated enzymes annotated enzymes database simply ranked according similarity enzyme query unknown function case cavbase enzymes high similarity appear top ranking exhibiting low similarity end bottom formally let represent similarity pair enzymes represents set potential enzymes given similarities compose ranking conditioned query indicates relation ranked higher query based similarity note relation two enzymes conditioned third enzyme context meaningful ranking possible enzymes without referring another enzyme approach adopts methodology nearest neighbor classifier ranking rather class label seen output algorithm quality rankings evaluated database contains annotated enzymes annotated queries evaluation phase compare obtained ranking ground truth ranking constructed numbers annotated enzymes ground truth ranking deduced catalytic similarity ground truth similarity query database enzymes counting number successive matches label query database enzymes thus catalytic similarity property pair enzymes contrast order create ground truth ranking two enzymes catalytic similarity calculated third enzyme example enzyme number catalytic similarity two compared enzyme labeled since enzymes belong family glycosyltransferases conversely enzyme manifests similarity value one enzyme labeled transferases case show relevant similarity chemistry reactions catalyzed formally let represent catalytic similarity two enzymes relation defined figure six enzyme structures shown five correspond known number catalytic similarity depicted edges graph algorithm present allows infer unannotated query denoted ranking annotated enzymes end unsupervised approach solely uses similarity measures whereas supervised approach also takes numbers annotated enzymes account zondag mei equals ith digit numbers otherwise figure gives example six enzymes five correspond known number catalytic similarity depicted edges graph proposed algorithm allows infer unannotated query ranking annotated enzymes algorithm may encountered among training data given similarities compose similar ground truth ranking conditioned query result entire ground truth ranking database enzymes known numbers constructed given annotated query enzyme supervised ranking contrast unsupervised ranking approaches supervised algorithms take ground truth information account training phase perform experiments conditional ranking algorithms using rankrls implementation let introduce notation denote couple consisting enzyme query database enzyme rankrls produces linear basis function model type denotes vector parameters implicit feature representation couple rankrls differs 
conventional methods optimizes convex differentiable approximation rank loss bipartite ranking area roc curve instead loss together standard regularization term parameter vector regularization parameter following loss minimized given training set denotes ground truth similarity defined set training couples ground truth information available subset containing results query outer sum takes queries account inner sum analyzes pairwise differences ranked results given query loss minimized computationally efficient manner using analytic shortcuts methods shown according representer theorem one rewrite following dual form kernel function four enzymes input weights dual space paper adopt kronecker product feature mapping containing information couples enzymes feature mapping individual enzyme kronecker product one easily show pairwise feature mapping yields kronecker product pairwise kernel dual representation traditional kernel enzymes specifying universal kernel leads universal kernel indicating one use kernel represent arbitrary relation provided learning algorithm access training data sufficient quality kernel introduced modelling interactions consider kernel universal approximation property also pairwise kernels exist cartesian pairwise kernel metric learning pairwise kernel transitive pairwise kernel nonetheless probably surprising kernels yield improvement concepts learned satisfy restrictions imposed kernels exception measure none similarity measures discussed section strictly speaking valid kernels using construction similarity measures converted kernels type made symmetric positive definite attributes guarantee numerically stable unique solution learning algorithm simply enforced symmetry averaging similarity matrix transpose subsequently made different similarity matrices positive definite performing eigenvalue decomposition setting eigenvalues smaller equal zero method leads negligible loss information compared numerical accuracy algorithms data storage finally kernel matrix normalized diagonal elements value equal one since procedures performed whole data set one arrives transductive learning setting minor adjustments would obtain traditional inductive learning setting note overfitting prevented applying procedure since numbers enzymes data set taken account since catalytic similarity symmetric measure also perform output algorithm matrix predicted values used ranking enzymes made symmetric averaging transpose performance measures ranking ranking obtained unsupervised supervised learning algorithms compared ground truth ranking applying performance measures commonly used information retrieval first ranking accuracy considered defined follows heaviside function returning one argument positive zero argument negative argument zero ranking accuracy considered generalization area roc curve two ordered classes interest ranking accuracy motivated two reasons firstly unlike performance measures consider levels hierarchy taken account determine performance different algorithms predicted rankings interpreted layered multipartite rankings see ranking accuracy preserves hierarchical structure counting pairwise comparisons second reason interest based fact ranking accuracy optimized rankrls software using convex differentiable approximation given loss function characterizes important difference traditional algorithms support vector machines resulting information retrieval setting instead traditional classification network inference setting since ranking accuracy generally known bioinformatics also 
evaluated algorithms using three conventional performance measures commonly considered bipartite rankings rankings containing relevant versus irrelevant objects three measures area roc curve auc mean average precision map normalized discounted cumulative gain ndcg auc map ground truth rankings converted bipartite rankings leading decrease granularity performance estimation chose threshold three ground truth similarity retrieved enzyme relevant enzyme query least first three parts number identical query experimental setup selected two data sets enzymes cavbase described section catalytic similarity enzyme pairs computed data set data set randomized split table summary results obtained unsupervised supervised ranking data sets combination similarity type performance measure performance averaged different folds queries standard deviation parentheses unsupervised supervised unsupervised supervised map auc ndcg map auc ndcg map auc ndcg map auc ndcg set mcs set mcs lpcs lpcs four folds equal size unsupervised case subset used individually allow comparison supervised model subset enzyme used query rank remaining enzymes described section performance rankings averaged obtain global performance folds supervised setting fold withheld test set three parts data set used training model selection process repeated part every instance used training testing thus outer neither query database enzymes thus used building model allows demonstrate methods generalize new enzymes addition inner loop implemented estimating optimal regularization parameter recommended value hyperparameter controls model complexity selected grid containing powers final model trained using whole training set median best hyperparameter values ten folds used implementation python train models results discussion differences similarities data sets table gives global summary results obtained unsupervised supervised ranking approach data sets one note sizeable difference performances different similarities data sets performance measures used despite variation clear data set considerably harder data set easily explained fact data set contains enzymes certain resolution active site furthermore set active site determined expert set active site resolved heuristically choosing largest cavity likely mistakes made annotation process consequently inferring functional similarity data set harder similarity measure based fingerprints usually results worst performance except ranking error unsupervised setting seems performance improve much supervised approach compared similarity measures likely fingerprints cause high loss information since even functionally dissimilar enzyme cavities considered similar according metric comparing two similarities mcs see differences data sets though perform relatively well mcs performs better data set clear champion data see http software set good performance cavbase data set explained easily cavbase computes largest common subgraphs could used construct similarity measure however graph representation leads loss information since coordinates pseudocenters restored moreover since size maximum common subgraph integer usually lies range nodes loss resolution mapping many different pairs cavities similarity score theory mcs suffers drawbacks even though resolution problem certain extent solved size maximum common subgraph divided size larger binding site graph representation could still lead slight loss information hand lpcs measure uses geometric information hence loss information introduced transforming pseudocenter 
representation graph representation moreover transformation cause resolution problem yet measure computed via solving multimodal optimization problem possible get stuck local optimum resulting similarity score low similar mcs lpcs seems perform relatively better data set compared data set probably local optimum becomes less issue former case explained fact data set contains larger cavities average hence making harder find global optimum finally consider measure based sequence alignment data set similarity measure competes mcs one best measures depending performance measure data set supervised case outperformed measure clear measure powerful method comparing cavities also limited bad resolution cleft like mcs seeks quantify largest similar region local alignment contains information common residues cavity simple though powerful measure benefits supervised ranking ranking data set data set showed considerable improvement supervised approach three important reasons put forward explain improvement performance section illustrate using data set one showed clear effects learning first traditional benefit supervised learning plays important role one expect supervised ranking methods outperform unsupervised methods take annotations account training phase guide model towards retrieval enzymes similar number conversely unsupervised methods solely rely meaningful similarity measure enzymes ignoring numbers second also advocate supervised ranking methods ability preserve hierarchical structure numbers predicted rankings figure supports claim summarizes values used ranking one fold test set obtained different models higher value indicated lighter color row means enzyme considered higher catalytic similarity query enzyme unsupervised ranking visualizes supervised ranking values shown row heatmap corresponds one query supervised models one notices much better correspondence ground truth furthermore different levels catalytic similarity better distinguished addition example distributions predicted values within one query visualized figure means box plots different populations within plot correspond different levels catalytic similarity query enzyme illustrates supervised models make better discrimination enzymes functionally similar dissimilar example query quartiles overlapping supervised model unlike unsupervised approach detects good ranking exact matches enzymes number identical query third reason improvement supervised ranking method found exploitation dependencies different catalytic similarity values roughly speaking one interested catalytic similarity enzymes one try compute catalytic similarity direct way based mutual relationships cavities derive indirect way similarity third enzyme division direct indirect approach shows certain correspondence similar discussions context inferring interaction signal transduction networks see unsupervised ranking boils certain sense direct approach supervised ranking interpreted indirect especially similarity matrix contains noisy values one expect indirect approach allows detecting back bone entries correcting noisy ones differences performance measures table indicates different performances degree influenced similarity measure data set used especially clear supervised ranking approach one observe clear distinction ground truth unsupervised supervised measures used ranking figure heatmaps values used ranking data set one fold testing phase row heatmap corresponds one query four figures top visualize similarities used construct unsupervised ranking four figures 
bottom visualize model output used derive supervised ranking ground truth catalytic similarity learned ranking accuracy area roc curve treat every position equally important two measures emphasize top ranking come surprise approximation ranking error optimized algorithms auc related coincide bipartite rankings latter make distinction relevant enzymes three numbers common enzymes since uses finer fragmentation functional similarity severe performance measure compared auc data set auc close theoretical optimum nearly similarity measures supervised case figure shows roc curves obtained applying threshold data set defines database enzyme hit least first three digits number correct contrast scalar performance measures table roc curve gives information quality ranking positions immediately clear supervised ranking outcompetes unsupervised ranking former curves closer upper left corner typically curves part beginning line high slope showing certain fraction relevant objects detected high sensitivity specificity fraction detected nearly without mistakes increases supervised learning step indicated higher offset curves clear lpcs roc curve next section curve usually part smaller average slope indicating point becomes harder detect signal unsupervised curves nearly straight line means point detection catalytically similar enzymes essentially random supervised curves still concave shape second part shows relevant enzymes still detected piece table also clear supervised ranking usually scores worse map ndcg compared auc ndcg performance sometimes even decreases learning model easily explained fact model optimizes quality complete ranking contrast top assessed map ndgc note top functionally similar enzymes number likely detected based similarity alone hence training model might required perform good section one see learning effect nicely figure similarity measure quality supervised ranking top measure worse measures indicated low ndcg overall ranking indicated auc quite good comparison lower part compensates bad ranking top depending application top general ranking might interest related work since comparison enzymes become important task functional bioinformatics vast number similarity measures proteins proposed far mentioned introduction reliable method unsupervised unsupervised lpcs unsupervised mcs unsupervised prediction prediction prediction prediction prediction unsupervised cat similarity cat similarity cat similarity cat similarity cat similarity supervised supervised supervised lpcs supervised mcs supervised prediction prediction prediction prediction prediction cat similarity cat similarity cat similarity cat similarity cat similarity figure plots values used ranking data set one randomly chosen query example different populations denote groups formed subdividing database enzymes according number number digits share query given query database enzyme shows distribution values supervised unsupervised approach respectively nearly similarity measures one observe much better separation groups supervised approach focus geometry properties certain regions enzyme however methods based sequence fold usually exhibit much lower complexity also lead good results especially sequence identity certain threshold profunc regard interesting tool bulk different methods applied sequence alignment motif template search also comparison active sites biological function enzymes derived closest match different databases pdb uniprot procat finally returned program despite powerful approach becomes nevertheless 
inefficient runtimes several hours fro single protein since considered sizeable data set nearly pairwise similarity scores computed became impossible compare results profunc addition focusing individual enzymes one also take interactions account inferring function proteins close interaction network expected similar functions one try infer function unanotated protein looking neighbors similarly one also solve optimization problems global network maximizing number edges connect proteins sharing function approaches make use probabilistic graphical models markov random fields conceptually methods might also enrich predictions obtained unsupervised approach usually consider cavity binding site information predict function proteins conclusion paper recast annotation problem conditional ranking problem shown retrieval enzymes functionality substantially improved applying supervised ranking method takes advantage ground truth numbers training phase contrast traditional methods rely heavily notion similarity search functionally related enzymes online databases methods lead unsupervised approach annotations taken account focused specifically similarity measures benefits compared approaches demonstrated previous work although method work meaningful similarity measure defined enzymes experiments could demonstrate considerable improvement quality overall ranking results influenced type data used way sup sup lpcs sup mcs sup sup unsup unsup lpcs unsup mcs unsup unsup average true positive rate roc curve different enzyme similarity measurements data set false positive rate figure receiver operating characteristic curves unsupervised supervised ranking methods data set enzyme considered functionally similar query first three digits number identical query ranking evaluated indicating optimal method highly dependent specific problem setting nevertheless supervised ranking algorithm outperformed unsupervised ranking algorithm similarities performance measures considered unsupervised approach succeeded quite well returning exact matches query hierarchical structure numbers better preserved rankings predicted supervised approach supervised ranking interpreted powerful alternative retrieval methods traditionally used bioinformatics acknowledgements bdb acknowledge support ghent university mrp bioinformatics nucleotides networks gratefully acknowledge financial support german research foundation dfg loewe research center synthetic microbiology marburg supported work academy finland grant respectively references friedberg automated protein function genomic challenge briefings bioinformatics vol enzyme function discovery structure vol altschul gish miller myers lipman basic local alignment search tool journal molecular biology vol altschul madden zhang zhang miller lipman gapped blast psiblast new generation protein database search programs nucleic acid research vol leslie eskin noble spectrum kernel string kernel svm protein pacific symposium biocomputing leslie eskin cohen weston noble mismatch string kernels discriminative protein classification bioinformatics vol mar powers copeland germer mercier ramanathan revesz comparison protein active site structures functional annotation proteins drug design proteins structure function bioinformatics vol rost enzyme function less conserved anticipated journal molecular biology vol chalk worth overington chan pdblig classification small molecular protein binding protein data bank journal medical chemistry vol kinoshita murakami nakamura prediction functional sites 
5
replace line paper identification number edit based stability criterion voltage source converters huanhai xin ziheng wei dong zhen wang leiqi zhang output impedance matrices voltage source converters vscs widely used power system stability analysis regardless impedance modeled always exist coupling terms impedance matrix makes system mimo system practical approximation coupling terms generally omitted stability criterion resultant siso system applicable however handling may result analytical errors letter proposes new stability criterion based equivalent siso system introducing concept impedances completely keep coupling terms effects pll parameters system stability studied based proposed criterion effectiveness proposed criterion verified hil simulation based platform index converters impedance modeling stability oscillation introduction voltage source converters vscs widely used renewable energy integration transmission systems however interaction power electronic controllers transmission lines lead power oscillations poses new challenges power system stability analyzing oscillation problems introduced power electronic devices methods widely used vsc modeled output impedance connected source grid represented similarly system impedance vsc grid mathematically written matrix relates voltage vector current vector currently methods vscs classified two categories according reference frames impedance matrices formulated synchronous frame frame based methods work jointly supported national key research development program national science foundation china science technology project yunnan electric power company yndw huanhai xin ziheng wei dong zhen wang leiqi zhang college electrical engineering zhejiang university hangzhou china email xinhh stationary frame abc frame based methods frame usually exist coupling terms elements impedance matrices vscs grid makes system mimo system usually generalized nyquist stability criterion gnsc used stability performance analysis methods similarly abc frame coupling terms also exist positive negative sequence impedances vsc system grid although two sequence impedances symmetrical grid strictly decoupled impedance matrix diagonal impedance matrix vsc usually coupling terms practical approximation coupling terms generally omitted stability criterion resultant siso system applicable unfortunately ignoring coupling terms may lead inaccurate analysis results mirror frequency coupled system letter presents based stability criterion gisc vscs mathematically manipulating characteristic equation system transformed equivalent siso system regarded based series circuit essence instability system explained resonance equivalent circuit evaluated classical nyquist stability criterion vsc model vsc considered letter shown fig converter uses conventional lcl filter network includes lline filter capacitor lline denotes total inductance filter transmission line converter controlled vector controller based pll current controller vsc voltage vff decoupling terms positions phasors reference frames system shown fig frame represents rotating frame introduced pll frame represents synchronous rotating frame steady state frame aligned frame paper focuses oscillations induced pll inner current loop similar following assumptions considered simplify problem time delay sampling circuits pwm neglected due tiny time scales dynamics filter replace line paper identification number edit vff neglected since oscillation frequency study much lower frequency filter vsc side sdq vdc node pwm abc abc 
abc sdqref grid side lline abc dqref current control pll pll fig block diagram converter pll rotating frame global frame rotating frame local frame synchronous speed speed obtained pll fig reference frames phasor positions emphasized constraint power factor vsc vsc interface statcom renewable energy usually dynamic model vsc polar coordinate simplicity analysis resistances filter inductor transmission line neglected frame dynamic model vsc including pll filter inductor current controller represented pll pll sdref dref sqref qref gpll transfer functions pll current controller respectively meanings symbols shown fig fig obtain model following steps followed linearizing solving linearized equations dynamic model frame obtained model transformed frame using angle relationships shown fig model frame written form polar coordinate follows gpll gpll subscript denotes values detailed derivation shown appendix dynamic model grid polar coordinate according topology shown fig model network formulated follows yline expressions yline shown appendix particular output filter inductor iii system definition follows vsc model grid model common structure shown follows grid controller based ygi defined follows defined inverse vsc grid follows vsc calculated follows vsc vsc vsc note matrix structure matrix since obtained diag diag mentioned section matrix exists coupling terms thus system becomes mimo system definition grid obtained follows grid grid grid note tyt implies admittance changed synchronous frame stationary frame corresponds positive negative sequence admittance based stability analysis section derive characteristic equations system analyze stability follows characteristic equation expressed det det determinant function since invertible equivalent det equivalent det implies det special case indicating vsc network resonance point focus paper excluding case equivalent replace line paper identification number edit det follows characteristic equation system simplified definitions rearrange vsc grid grid explained series circuit vsc grid therefore oscillation vsc regarded series resonance equivalent circuit system described siso system thus complexity analysis reduced distinctly worth noting vsc grid usually assumed stable respectively analyzing interaction vsc grid also assume vsc stable connected ideal grid pole right half plane consequently follows nyquist criterion gisc nyquist curve encircle point system stable note although form gisc looks similar criterion given meanings different one hand transfer function used plot nyquist curve zsource zload source load impedances seen interface equivalent circuit hand criterion performs well system siso system may errors resulting coupling terms systems contrast gisc uses special structure impedance matrix established equivalent siso system mimo system without approximation moreover distance nyquist curve represents stability margin gisc also used guide oscillation suppression example influences pll parameters system stability analyzed tuned via gisc studied system shown fig lline set parameters vsc listed table nyquist plots transfer function three groups pll parameters proportional gain ppll integral gain kipll given fig shows neighborhood pll parameters given table larger proportional gain ppll positive effect system stability see curves fig larger integral gain kipll negative effect see curves reference signals current control loop assumed constant dref qref stability system analyzed using gisc nyquist curve shown fig point encircled initially thus system 
stable additional inductor plot shows system two poles right half plane system unstable hil simulation results fig also verify analysis oscillation supersynchronous oscillation occur impedance increased imaginary axis det real axis fig nyquist plots transfer function different pll parameters imaginary axis det lline lline real axis fig nyquist plots transfer function different lline voltage phase current phase divergent oscillation experimental results hil simulation vsc conducted vsc grid simulated converter controlled controller based impedance transmission line changed voltage spectrum fig instability due change inductance replace line paper identification number edit table parameters vsc symbol gpll linearized state equation lline expressed description base value power base value voltage inductance inverter side filter transfer function current controller transfer function pll linex liney line line value denote power factor angle node lline converted polar coordinates follows line line line yline yline conclusion based stability criterion proposed stability analysis gridconnected vsc system rigorous mathematical derivation mimo system composed vsc grid transformed equivalent siso system composed grid vsc consequently oscillation explained series resonance equivalent circuit gisc used stability analysis system gisc based study shows significant effects pll parameters system stability hil simulation validates effectiveness proposed criterion future work extend criterion systems multiple vscs use criterion guide vsc control design linearizing model inverter obtained frame rewritten form polar coordinates follows sdref dref sqref qref pll cos sin sin cos sin cos cos sin follows assumptions sdref sqref dref qref satisfied thus following equation obtained pll considering angles satisfy pll pll shown fig pll expressed pll gpll gpll terms thus eliminating pll holds gpll gpll subscript denotes values appendix linearized state equation inductor reference frame expressed nodes next inductor since voltage ideal grid considered constant sin lline sin cos nodes next inductor denote power factor angle node since voltage ground also constant linearized state equation expressed sin sin cos references similarly linearized state equation capacitor coordinates expressed appendix pll pll pll blaabjerg chen kjaer power electronics efficient interface dispersed power generation systems ieee trans power vol wen boroyevich burgos mattavelli shen analysis impedance inverters ieee trans power vol wen dong boroyevich burgos mattavelli shen analysis stability paralleled converters ieee trans power vol cespedes sun impedance modeling analysis connected converters ieee trans power vol mar bakhshizadeh wang blaabjerg couplings phase domain impedance modeling converters ieee trans power vol sun stability criterion inverters ieee trans power rygg molinas chen cai modified sequence domain impedance definition equivalence impedance definition stability analysis power electronic systems ieee emerging sel topics power vol zhao yuan yan voltage dynamics current control weak grid ieee trans power vol jul
3
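The preceding row (label 3) derives a generalized impedance-based stability criterion (GISC): the coupled MIMO VSC-grid model is reduced to an equivalent SISO loop, and stability is decided by whether the Nyquist curve of the resulting impedance ratio encircles the critical point -1. A minimal numerical sketch of that final encirclement check, assuming an illustrative first-order impedance pair (the transfer function L and the parameter values below are hypothetical placeholders, not the paper's model):

import numpy as np

def nyquist_encirclements(L, w_max=1e6, n=200_000):
    # Net winding number of the Nyquist curve of L(jw) around -1+0j,
    # computed as the accumulated argument of L(jw) + 1 while w sweeps
    # from -w_max to +w_max (approximating the full imaginary axis).
    w = np.linspace(-w_max, w_max, n)
    curve = L(1j * w) + 1.0
    phase = np.unwrap(np.angle(curve))
    return (phase[-1] - phase[0]) / (2 * np.pi)

# Hypothetical example: grid impedance Zg(s) = s*Lline combined with a
# VSC admittance approximated by a first-order lag Yvsc(s) = k/(tau*s + 1).
Lline, k, tau = 5e-3, 0.8, 1e-3
L = lambda s: s * Lline * k / (tau * s + 1)

# With the open loop stable, zero net encirclements of -1 means the
# equivalent series circuit, and hence the interconnection, is stable.
print("net encirclements of -1:", round(nyquist_encirclements(L), 2))

Sweeping Lline in this sketch plays the role of the row's HIL experiment, where increasing the line inductance drives the Nyquist curve to encircle -1 and the system into sustained oscillation.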
jun free topological groups fucai lin shou lin chuan liu abstract space called tychonoff necessary sufficient condition function continuous restriction compact subset continuous paper mainly discuss free topological groups generalize results yamada free topological groups introduction recall called necessary sufficient condition subset closed closed every compact subset generalizing metrizability studied intensively topologists analysts space called tychonoff necessary sufficient condition function continuous restriction compact subset continuous clearly every tychonoff converse false indeed uncountable cardinal power see theorem problem widely used study topology analysis category see results research presented two separate papers paper mainly extend results seek applications study free abelian topological groups current paper shall deter free topological groups extend results yamada free topological groups paper organized follows section introduce necessary notation terminologies used rest paper section investigate property free topological groups generalize results yamada section pose interesting questions class free topological groups still unknown preliminaries section introduce necessary notation terminologies throughout paper topological spaces assumed tychonoff unless otherwise explicitly stated first let set positive integers first countable order space always denote set points undefined notation terminologies reader may refer let topological space subset closure denoted moreover called bounded every continuous function defined bounded space called provided subset closed closed compact subset space called tychonoff necessary sufficient condition function continuous restriction compact subset continuous note every tychonoff subset called sequential neighborhood mathematics subject classification primary secondary key words phrases stratifiable space space free topological group first author supported nsfc nos natural science foundation fujian province nos china fucai lin shou lin chuan liu sequence converging eventually subset called sequentially open sequential neighborhood points subset called sequentially closed sequentially open space called sequential space sequentially open subset open space said exists sequence converges definition let topological space subset called point continuous function obvious open converse true open subsets tychonoff spaces subset called functional neighborhood set continuous function normal neighborhood closed subset functional definition let cardinal indexed family subsets topological space called point set countable compact subset set countable locally finite point neighborhood set finite compact subset set finite strongly set neighborhood family strictly set functional neighborhood family definition let topological space cardinal indexed family subsets topological space called fan precisely family locally finite fan called strong resp strict set neighborhood resp functional neighborhood family sets belong fixed family subsets fan called particular closed fan called clearly following implications strict fan strong fan fan let family subsets space called every compact subset arbitrary open set containing finite subfamily recall space resp finite resp countable recall space said continuous closed image metric space definition topological space stratifiable space open subset one assign sequence open subsets whenever note space stratifiable let space throughout paper copy every denotes subspace consists free topological groups words reduced length 
respect free basis let neutral element empty every element also called word form word called reduced contain pair consecutive symbol form follows word reduced different neutral element particular element distinct neutral element uniquely written form support defined supp given subset put supp supp every let natural mapping defined free topological groups section investigate free topological groups generalize results yamada recently banakh proved space indeed obtained result class weaker spaces however discuss following quesiton question let space first give following theorem gives complementary result banakh lemma let normal proof compact subset contained corollary hence follows lemma theorem let paracompact proof since paracompact follows theorem also paracompact hence normal legal apply lemma finish proof next shall show arbitrary metrizable space implies see theorem first prove two technic propositions prove need description neighborhood base obtained let obviously subset clopen normal subgroup easy see represented let set continuous pseudometrics space take arbitrary let inf uspenskii proved fucai lin shou lin chuan liu continuous neighborhood base moreover proposition stratifiable separable discrete proof assume contrary neither separable discrete contains space closed subset convergent sequence limit point discrete closed subset since discrete closed subset choose discrete family open subsets suchsthat may assume arbitrary since stratifiable closed follows homeomorphic closed subgroup hence closed subspace next shall show contains strict contradiction indeed since normal suffices construct strong choose function bijection let claim family strong divide proof following three statements set closed fix arbitrary suffices show set closed let suppfn obvious closed discrete subspace follows homeomorphic closed subgroup thus closed subspace since discrete set closed thus closed family strong induction choose two families open neighborhoods satisfy following conditions let obviously contains since open follows corollary open claim family suppose exist compact subset increasing sequence unk since stratifiable paracompact closure set supp compact however since unk exist hence supp supp wmk wnk therefore set supp intersects element family since set infinite set infinite supp intersects infinitely many contradiction compactness supp therefore family therefore family strong family locally finite point thus locally finite free topological groups indeed suffices show shall show continuous pseudometrics since sequence converges therefore uncountable set choose infinitely many predecessors since infinite set exist word furthermore hence family locally finite point proposition metrizable space locally compact proof assume contrary locally compact exists closed hedgehog subspace closed discrete subset closed discrete subset neighborhood base proposition space separable next shall show contains strict contradiction since theorem subspace normal hence suffices show contains strong furthermore follows proposition family subsets strongly hence suffices show contains put xzn obvious closed furthermore follows proof proposition thus family locally finite point next claim family suppose exist compact subset two sequences eni closure set supp compact since paracompact since family disjoint one sequences infinite set infinite set closed discrete set zni contained infinite set supp contradiction since supp compact finite set exists xni obviously closed discrete set xni infinite set contains supp contradiction 
show one main theorems paper theorem metrizable space following equivalent fucai lin shou lin chuan liu space locally compact separable discrete proof equivalence showed obvious propositions theorem natural ask following question question let metrizable space note answer question negative indeed arbitrary metrizable space since closed mapping space thus following theorem however question still unknown proposition metrizable space locally compact compact proof assume contrary neither locally compact set points compact contains closed subspace every closed discrete subset neighborhood base convergent sequence limit contained open subset family discrete order obtain contradiction shall construct strict choose open neighborhood oni point family oni discrete oni oni let vni wni oni vni wni two arbitrary open neighborhoods respectively vni vni obviously closed follows corollary open neighborhood author showed hence family locally finite complete proof suffices show family suppose exists compact subset two sequences similar proof proposition obtain contradiction theorem metrizable space following equivalent space locally compact compact proof equivalence showed implication obvious proposition following theorem proved free topological groups theorem let stratifiable space satisfies one following conditions either metrizable topological sum space since contains closed copy follows theorem following theorem theorem let stratifiable following equivalent either metrizable topological sum following proposition shows replace theorem first recall special space let proposition subspace proof suffices show contains strict follows theorem find two families infinite subsets finite sets finite put suffices show following three statements family strictly since space follows theorem also paracompact hence paracompact thus normal hence suffices show family strongly let put obviously since open follows corollary open claim family suppose exist compact subset subsequence unk choose arbitrary point unk since paracompact follows closure set supp compact therefore exists supp supp since exists infinite set contradiction finite closed fucai lin shou lin chuan liu fix arbitrary next shall show closed let supp closed discrete subset since metriable follows homeomorphic closed subgroup hence closed subspace since discrete set closed thus closed family locally finite point indeed suffices show shall show since continuous choose function put condition families easy see exist choose let hence theorem let metrizable space separable proof suppose contains closed copy use notation theorem since metrizable exists discrete family open subsets arbitrary choose open neighborhoods point respectively put similar proof theorem show family open subsets hence family strict contradiction open questions section pose interesting questions class free topological groups still unknown theorems natural pose following question question let metrizable space yamada also made following conjecture yamada conjecture subspace set points metrizable space compact indeed know answer following question question set points metrizable space compact particular following question question let convergent sequence limit point uncountable discrete space authors showed closed subspace stratifiable however following two questions still open free topological groups question closed subgroup topological group question subspace topological group acknowledgements authors wish thank reviewers careful reading preliminary version paper providing many valuable 
suggestions references arhangel okunev pestov free topological groups metrizable spaces topology arhangel tkachenko topological groups related structures atlantis press paris world scientific publishing pte hackensack banakh gabriyelyan closure class separable metrizable spaces monatshefte mathematik banakh fans applications general topology functional analysis topological algebra blasco proc amer math borges stratifiable proc amer math ceder generalizations metric spaces pacific engelking general topology revised completed edition heldermann verlag berlin gruenhage generalized metric spaces kunen vaughan eds handbook topology elsevier science publishers amsterdam kelley general topology new noble continuity functions cartesian products trans amer math weese discovering modern set theorey graduate studies mathematics czech math lin lin liu sequential fans applications free abelian topological groups lin covers mappings second edition beijing china science press lin note general topoloy michael pacific math yamada spaces free topological groups proc amer math yamada natural mappings free topological groups metrizable spaces topology uspenskii free topological groups metrizable spaces math ussr izv fucai lin school mathematics statistics minnan normal university zhangzhou china address linfucai shou lin institute mathematics ningde teachers college ningde fujian china address chuan liu department mathematics ohio university zanesville campus zanesville usa address
4
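The preceding row (label 4) hinges on the distinction between k-spaces and k_R-spaces, which the stripped text states only elliptically. A standard restatement of the two definitions and the one-way implication, in LaTeX (textbook definitions, not quoted verbatim from the row):

For a Tychonoff space $X$, with $K$ ranging over the compact subsets of $X$:
\begin{itemize}
  \item $X$ is a \emph{$k$-space} if a set $A \subseteq X$ is closed if and only if
        $A \cap K$ is closed in $K$ for every compact $K \subseteq X$;
  \item $X$ is a \emph{$k_R$-space} if a function $f \colon X \to \mathbb{R}$ is
        continuous if and only if $f|_K$ is continuous for every compact
        $K \subseteq X$.
\end{itemize}
Every $k$-space is a $k_R$-space, while the converse fails: an uncountable power
$\mathbb{R}^{\kappa}$ is a $k_R$-space but not a $k$-space, which is the
counterexample the row's abstract alludes to.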
binary hypothesis testing byzantine sensors fundamental security efficiency jan xiaoqiang jiaqi yilin abstract paper studies binary hypothesis testing based measurements set sensors subset compromised attacker measurements compromised sensor manipulated arbitrarily adversary asymptotic exponential rate probability error goes zero adopted indicate detection performance detector practice expect attack sensors sporadic therefore system may operate sensors benign extended period time motivates consider detection performance detector probability error attacker absent defined efficiency detection performance attacker present defined security first provide fundamental limits propose detection strategy achieves limits consider special case security efficiency words detection strategy achieve maximal efficiency maximal security simultaneously two extensions secure hypothesis testing problem also studied fundamental limits achievability results provided subset sensors namely secure sensors assumed equipped better security countermeasures hence guaranteed benign detection performance unknown number compromised sensors numerical examples given illustrate main results index terms hypothesis testing security secure detection efficiency byzantine attacks fundamental limits xiaoqiang ren jiaqi yan yilin school electrical electronics engineering nanyang technological university singapore emails xren ylmo corresponding author january draft ntroduction background motivations network embedded sensors pervasively used monitor system vulnerable malicious attacks due limited capacity sparsely spatial deployment attacker may get access sensors send arbitrary messages break communication channels sensors system operator tamper transmitted data integrity attacks motivated many researches infer useful information corrupted sensory data secure manner paper follow direction focus performance inference algorithm attacker absent performance attacker knowledge inference algorithm present define two metrics efficiency security characterize performance hypothesis testing algorithm detector two scenarios respectively analyze security efficiency work contributions consider sequential binary hypothesis testing based measurements sensors assumed sensors may compromised attacker set chosen attacker fixed time adversary manipulate measurements sent compromised sensors arbitrarily according kerckhoffs principle security system rely obscurity assume adversary knows exactly hypothesis testing algorithm used fusion center hand fusion center system operator knows number malicious sensors know exact set compromised sensors time fusion center needs make decision underlying hypothesis based possibly corrupted measurements collected sensors time given hypothesis testing algorithm fusion center measurements fusion rule probability error investigated asymptotic exponential decay rate error denote security system adopted indicate detection performance hand attacker absent detection performance hypothesis testing algorithm asymptotic exponential decay rate error probability denoted efficiency focus efficiency security particular interested characterizing fundamental limits efficiency security detectors achieve limits january draft main contributions work summarized follows best knowledge first work studies efficiency security inference algorithm mild assumptions probability distributions measurements provide fundamental limits efficiency security corollaries furthermore present detectors low computational complexity achieve limits theorem 
therefore system operator easily adopt detectors proposed obtain best efficiency security interestingly cases gaussian random variables variance different mean maximal efficiency maximal security achieved simultaneously theorem similar results fundamental limits detectors possess limits established several different problem settings section shows analysis techniques insightful may helpful future related studies related literature sensor referred byzantine sensor messages fusion center fully controlled recently detection byzantine sensors studied extensively among took perspective attacker aimed find effective attack strategy focused designs resilient detectors formulated problem way main results critical fraction byzantine sensors blinds fusion center counterpart breakdown point robust statistics effective attack strategy minimizes asymptotic error exponent setting divergence since byzantine sensors assumed generate independent identical distributed data resulting measurements minimum divergence corresponding robust detector coincide similar results obtained considering probability error bayesian setting asymptotic bayesian performance metric chernoff information respectively authors focused computation efficient algorithms determine optimal parameters procedure large scale networks different fractions byzantine sensors two types sensors assumed authors thereof proposed practice manipulate data sensor adversary may attack sensor node break communication channel sensor fusion center paper distinguish two approaches january draft maximum likelihood procedure based iterative expectation maximization algorithm simultaneously classifying sensor nodes performing hypothesis testing authors showed optimal detector threshold structure fraction byzantine sensors less game formulated among equilibrium point attack strategy detector obtained computation efficient nearly optimal equilibrium point exact equilibrium point certain cases obtained numerical simulations used study equilibrium point byzantine sensors assumed generate malicious data independently work assumes byzantine sensors may collude collusion model reasonable since attacker malicious arbitrarily change messages sensors controlls notice also compared independence model collusion model complicates analysis significantly unlike sensors send binary messages work assumes measurements benign sensor take value since binary message model simplifies structure corrupted measurements hence implicitly limits capability attacker easier dealt work differs follows authors focused one time step scenario analysis thus fundamentally different challenging contrary work hypothesis testing performed sequentially asymptotic regime performance metric chernoff information concerned similar setting work considered recent work however focused equilibrium point performance security efficiency obtained equilibrium detection rule merely one point admissible set characterized paper finally remark aforementioned literature mainly focuses designing algorithms adversarial environment however algorithms may perform poorly absence adversary comparing classic detector naive bayes detector fundamental question seek answer paper design detection strategy performs optimally regardless whether attacker present organization section formulate problem binary hypothesis testing adversarial environments attack model performance indices notion efficiency security defined sake completeness give brief introduction large deviation theory section iii key supporting technique later 
january draft analysis main results presented section first provide fundamental limits efficiency security propose detectors achieve limits last show maximal efficiency maximal security achieved simultaneously cases two extensions investigated section providing numerical examples section conclude paper section vii notations set nonnegative real numbers set positive integers cardinality finite set denoted set int denotes interior sequence denote average time vector support denoted supp set indices nonzero elements supp roblem ormulation consider problem detecting binary state using sensors measurements define measurement time row vector vector measurements time time rmk scalar measurement sensor time simplicity define given assume measurements independent identically distributed probability measure generated denoted denoted words set probability equals equals denote probability space generated measurements yil superscript stands original contrasted corrupted measurements january draft expectation taken respect denoted assume absolutely continuous respect hence ratio well defined log derivative define rmk detector time mapping measurement space interval system makes decision system flips biased coin choose probability probability system strategy defined infinite sequence detectors time attack model let manipulated measurements received fusion center time bias vector injected attacker time following assumptions made attacker among assumption essence limitation pose assumption spare attack exists index set supp furthermore system knows number know set remark assumption pose restrictions value yia sensor compromised time bias injected data compromised sensor arbitrary assumption says attacker compromise sensors time practical assume attacker possesses limited resources number compromised sensors upper bounded since otherwise would pessimistic problem becomes trivial quantity might determined priori knowledge quality sensor alternatively quantity may viewed design parameter indicates resilience level system willing introduce details january draft remark notice also since attacks set compromised sensors attack strategy concerned performance metric introduced shortly equivalent replace cardinality requirement note also assumed malicious sensor nodes known system operator moreover set compromised sensors assumed fixed time notice assume set compromised sensors fixed cardinality exists set like bound compromised sensors attacker would required abandon sensor nodes compromised sensible notice assumed set sensors fixed well also note though work concerned asymptotic performances security efficiency introduced later numerical simulations section show algorithm indeed perform quite well setup actually horizon problem considered time required attacker control benign sensor large enough reasonable assume set compromised sensors fixed fact exactly sparse attack model assumption widely adopted literature dealing byzantine sensors state estimation quickest change detection finally note assume pattern bias yia malicious bias injected may correlated across compromised sensors correlated time compared independence assumption assumption improves effectiveness attacker realistic sense attacker malicious whatever wants remark parameter also interpreted many bad sensors system willing tolerate design parameter system operator general increasing increase resilience detector attack however shown rest paper larger may result conservative design likely cause performance degradation normal operation sensor compromised 
assumption model knowledge attacker knows probability measure true state january draft knowledge sensor attacker develop probability measure obtain true state attacker may deploy sensor network though might difficult satisfy practice assumption fact conventional literature concerning attacks nevertheless assumption accordance kerckhoffs principle assumption measurement knowledge time attacker knows current historical measurements available compromised sensors since attacker knows true measurement compromised sensor may set fake measurement arrived fusion center value wants injecting yia one may also verify results paper remain even attacker strong enough time knows measurements sensors admissible attack strategy causal mapping attacker available information bias vector satisfies assumption formalized follows let define true measurements compromised sensors time yin similar defined manipulated bias vector time bias chosen function attacker available information time satisfies assumption denote admissible attacker strategy notice since time input variable available measurements increasing respect time definition exclude attack strategy denote probability space generated manipulated measurements expectation taken respect probability measure denoted function possibly random example given available information adversary flip coin decide whether change measurement january draft asymptotic detection performance given strategy system attacker probability error time defined notice could take value hence expected value used compute probability error paper concerned scenario result let define max words indicates probability error considering possible sets compromised sensors state given detection rule attack strategy notice also accordance assumption set equation fixed time ideally time system wants design detector minimize however task hardly accomplished analytically since computation probability error usually involves numerical integration thus article consider asymptotic detection performance hope provide insight detector design define rate function lim inf log clearly function system strategy attacker strategy write indicate relations since indicates rate probability error goes zero system would like maximize order minimize detection error contrary attacker wants decrease increase detection error interested problems practice attacker may present consistently result system operate extended period time sensors benign thus natural question arises detection rule decent performance regardless presence attacker fundamental security efficiency words detector good presence adversary bad benign environment paper devoted answering question january draft informally performance detection rule attacker referred efficiency performance attacker provided attacker knows detection rule used system present referred security mathematically speaking given system strategy denote efficiency security respectively formalized follows inf zero vector iii reliminary arge eviation heory section introduce large deviation theory key supporting technique paper proceed first introduce definitions let moment generating function random vector probability measure dot product let support finite define transform function log sup log theorem multidimensional theorem suppose sequence random vectors probability measure let empirical mean int probability satisfies large deviation principle closed lim sup log inf open lim inf january log inf draft esults technical preliminaries denote moment generating function ratio hypothesis exp exp 
furthermore define region finite transform log quantities defined similarly denote divergences apply multidimensional theorem avoid degenerate problems adopt following assumptions assumption int int assumption divergences assumptions following properties proof provided appendix theorem assumptions followings hold twice differentiable strictly convex strictly increasing strictly decreasing following equalities hold january draft fig illustration figure plotted assuming bernoulli distributed hypotheses since let define make presentation clear illustrate fig inverse functions defined follows max min let dmin min define dmin dmin min january draft fundamental limits ready provide fundamental limitations efficiency security proof provided appendix theorem detection rule following statements true max let dmin dmin remark theorem indicates maximum efficiency achieved detector maximum security achieved therefore less half sensors compromised implies detectors zero security case naive bayes detector optimal choice since optimal efficiency analysis becomes trivial therefore without notice assume rest paper notice fourth constraint theorem indicates security efficiency general cases maximum security efficiency may achieved simultaneously however section prove special case exist detectors achieve maximum security efficiency time notice strictly increasing decreasing therefore combining one obtains strictly decreasing dual version obtained follows let inverse function equality holds involutory function every dmin following two corollaries results follow straightforwardly theorem thus omit proofs january draft corollary suppose security detector satisfies maximum efficiency satisfies following inequality max min corollary suppose efficiency detector satisfies maximum security satisfies following inequality max min dmin achievability section propose detector achieves upper bounds corollaries let time algorithm denoted implemented follows remark discuss computational complexity detection rule computational complexity step notice quantity computed recursive fashion complexity step log compute first sort ascending order respectively sum first elements computational complexity step step fixed step computational complexity therefore total computational complexity time step log show performance proof provided appendix january draft algorithm hypothesis testing algorithm compute empirical mean likelihood ratio time time sensor compute compute following sum min min make decision next step otherwise make decision next step otherwise make decision make decision otherwise definition called admissible pair following inequalities holds defined corollary theorem let admissible pair efficiency security holds theorem means upper bounds corollaries achieved hence provide tight characterization admissible efficiency security pair illustrate shape admissible region fig remark optimal detector may necessarily unique sense may exist detectors one defined algorithm achieve efficiency january draft security efficiency fig achievable efficiency security region detector figure plotted assuming bernoulli distributed hypotheses shaded area admissible pair detector red dashed line function blue dotted line function security limits definition detectors achieving limits asymptotic performance however performance terms detection error may different planning investigate future work special case symmetric distribution subsection discuss case maximum security efficiency achieved simultaneously detector notice definition admissible pair 
know admissible pair hence detector defined section achieve maximum security efficiency january draft simultaneously words adding security deteriorate performance system absence adversary following theorem provides sufficient condition based first order derivative proof presented appendix sake legibility theorem holds therefore possesses maximal security also maximal efficiency notice whether sufficient condition satisfied merely depends probability distribution original observations independent number compromised sensors exists symmetry distribution sufficient condition satisfied specific exists constant borel measurable set one prove implies provide two examples pairs symmetric distributions follows bernoulli distributed probability probability satisfies following equation gaussian distributed xtension section consider two extensions problem settings discussed section january draft secure sensors consider subset secure sensors well protected compromised attacker would like study security efficiency secure sensors deployed let total sensors secure remaining sensors normal ones compromised adversary subsection take value necessarily satisfy settings section denote efficiency security detection rule case one obtains following results theorem theorem detection rule following statements true max let dmin dmin theorem proved manner appendix notice essential difference range statement second bullet due fact secure sensors compromised remark theorem one sees replacing normal sensors secure sensors change fundamental security efficiency however benefit secure sensors security improved also one notice gains deploying secure sensors intuitively case redundancy normal sensors enough furthermore detector fzss algorithm slight variation treats secure sensors separately achieves limits stated following theorem proved manner appendix january draft theorem pair satisfies max holds fzss fzss algorithm hypothesis testing algorithm fzss secure sensors compute sensor compute sensor compute minimum sum normal sensors min min make decision next step otherwise make decision next step otherwise make decision make decision otherwise unknown number compromised sensors previous section assume system attacked sensors compromised however practice exact number compromised sensors likely unknown subsection assume know estimated upper bound compromised sensors denoted let denote number sensors actually compromised therefore may take value section remark requirement equivalently replaced implicit assumption estimated upper bound tight number compromised sensors indeed therefore section may also interpreted tight upper bound number actually compromised senors january draft given detector denote dna detection performance number compromised sensor one following present pairwise dna propose algorithm achieve performance limits similar argument section adopted obtain results details omitted define min one obtains detector hold dna dna let admissible detection performance zna zna detector algorithm variation section denoted achieve performance dna zna umerical xamples asymptotic performance simulate performance detector proposed section efficiency security compare empirical results theoretical ones shown fig parameters fig used simulate security assumed following attack strategy january draft algorithm hypothesis testing algorithm initialization compute sensor compute two minima min min zna make decision stop zna make decision stop replace make decision make decision otherwise adopted attacker modifies observations compromised 
sensors every hand attack strategy holds every simulate performance high accuracy adopt importance sampling approach plot fig let notice theoretical performance coincides exactly fundamental limits fig therefore fig verifies algorithm indeed achieves fundamental limits performance proved algorithm optimal sense achieves fundamental security efficiency however notice security efficiency asymptotic performance metrics example show algorithm possesses quite nice performance well comparing naive bayes detector remark bayes detector strictly optimal optimal time horizon absence attackers security zero results fig chosen january draft security theoretical empirical efficiency fig comparison empirical theoretical performance detector fig illustrates algorithm detection performance comparable naive bayes detector attacker absent finitetime performance metric defined attacker absent detector naive bayes one note security result adopting secure detector increases security system introducing minimum performance loss absence adversary comparison detectors simulate detectors introduced later use sensor network model fig asymptotic performances efficiency security summarized table performances attacker absent fig table consistent statement algorithm achieves best security efficiency fig shows preferable adopt algorithm well respect performance attacker absent following present two detectors compared detail first detector equilibrium detection rule proposed cases january draft naive bayes time fig performance absence adversary detection rule shares spirit mean robust statistics first removes largest smallest ratios compares mean remaining ratios classic probability ratio test details detection rule denoted ftrim formalized follows ftrim smallest element empirical mean ratio time senor defined shown security efficiency ftrim ftrim ftrim since hold definition theorem one obtains security algorithm efficiency larger january draft therefore algorithm preferable since security achieves larger efficiency algorithm ftrim particular theorem efficiency gain algorithm certain cases next detector procedure studied assuming malicious sensor nodes generate fictitious data randomly independently probability compromised sensor flips binary message known procedure simple works follows time receiving binary messages fusion center makes decision otherwise let sequence thresholds used detector time infinity sequel denote detector fqom notice fqom naive bayesian detector minimizes weighted sum miss detection false alarm time weight determined clear fqom used fusion center attack always sending true state therefore time performance probability detection error detector fqom attacks follows fqom fqom reasonable set since otherwise detection error however challenging obtain optimal analytically minimize detection error numerical simulations varying time obtain approximate security optimal parameters simulate performance algorithm optimal parameters obtained used attacker absent january draft table asymptotic performances algorithm trimmed mean detector ftrim optimal procedure fqom ftrim fqom security efficiency ftrim fqom time fig performance ftrim fqom absence adversary vii onclusion uture work paper studied detection performance detector attacker absent termed efficiency detection performance attacker knowing detector present termed security setting binary hypothesis testing conducted based measurements set sensors compromised attacker measurements manipulated arbitrarily first provided fundamental limits efficiency security 
detector presented detectors possesses limits efficiency security therefore clear guideline balance efficiency security established system operator interesting point fundamental cases maximal efficiency maximal security achieved simultaneously maximal january draft efficiency security achieved without compromising security efficiency addition two extensions investigated secure sensors assumed first one detection performance beyond efficiency security concerned second one main results verified numerical examples investigating problem measurements benign sensors future direction ppendix roof heorem following lemma needed prove theorem lemma assumption hold following statement true exists small enough log log log log strictly convex derivative log log satisfy log log log log proof definition proves assuming convexity exponential function know exp january draft therefore proves domain log convex furthermore assumption int gives int hence int proves log small enough well known log infinitely differentiable int see exercise basic calculations give log log always holds log second derivative quantity strictly positive since divergence probability measure strictly positive assumption therefore log strictly convex domain strict convexity log proved similarly take derivative log yields log equation proved similarly ready prove theorem proof theorem define derivative log since log strictly convex know strictly increasing therefore inverse function well defined denote inverse function convexity log log log hence suppose log log log log january draft notice last term rhs equation hence prove log log take derivative second order derivative last inequality due fact log strictly convex thus second derivative strictly positive hence prove twice differentiable strictly convex notice prove also strictly increasing similarly prove properties combining prove equation proved similarly since sup log sup log sup log sup log conclude ppendix roof heorem proof divided four parts devoted one statements theorem part index set define following bayesian like detector january draft empirical mean ratio time sensor defined denote well known minimize average error probability recall defined notice log lim inf log lim inf hence attacker absent optimal sense rate defined maximized furthermore theorem gives therefore holds detector part part show proof construction construct attack strategy detection rule following inequality holds let attack strategy follows sensors compromised distributions flipped measurements sensors sensors compromised distributions flipped thus attack either sensors follow distribution sensors follow distribution words sensors different distributions different notice means optimality detection rule defined one obtains equation thus obtained part iii clear definitions holds part consider following product measures january draft measure generated attack flips distribution last sensors true hypothesis measure generated benign sensors true hypothesis let consider following problem given find detection rule minimized every let given log recall function defined bayesian decisiontheoretic detection theory solution problem let lim inf lim inf log log optimality detector following hold implies log log lim inf lim inf furthermore definitions yield log log lim inf lim inf january draft therefore detector following hold let evaluate let log theorem yields notice monotonicity implies holds therefore holds one thus obtains detector also easy see holds thus similarly one considers detection problem measures obtains 
detector equation follows equation january draft ppendix roof heorem theorem proved showing certain conditions satisfied lemma furthermore special structure conditions ensure attacks measurements sensors attack free environment belong certain set theorem applied proceeding need define following subsets definition define definition let define ball bal definition let define extended ball ebal bal definition extended balls clear ebal following inequality holds min combining definition know time output defined ebal output ebal ebal ebal january draft first need following supporting lemma lemma given optimal value following optimization problem given infm proof since nonnegative take value one equivalently rewrite infm nonnegativity equation equivalent inf obtain solution equation let fist focus following optimization problem minm denotes optimal value following show claim solution january whatever draft claim clearly holds following show claim correct equation trivial focus due convexity functions one obtains therefore without performance loss one may restrict solution set follows clear monotonicity holds thus proves notice decreasing respect fact yields equivalent minm concludes lemma lemma assume admissible pair following statements true bal bal ebal ebal ebal bal ebal bal proof suffices prove given bal convexity one obtains january draft second inequality follows fact increasing notice definition holds proof done proved similarly definition ebal need prove bal bal holds notice increasing respect thus lemma suffices prove true decreasing respect similar suffices prove bal bal holds lemma suffices prove equivalent prove follows definition fact decreasing respect proved similarly lemma one obtains straightforwardly following lemma lemma assume admissible pair following set inclusions true ebal ebal bal bal ready prove theorem proof theorem focus proof similar simpler approach used prove notice ebal lemma gives january draft attacks holds bal therefore log lim sup log bal inf lim sup second inequality holds theorem fact bal closed similarly ebal lemma one obtains lim sup log follows proof thus complete ppendix roof heorem define following two functions dmin dmin following two lemmas lemma convex furthermore following equality holds proof equation follows directly prove convexity first need prove convex concave dmin notice inverse function twice differentiable chain rule therefore since strictly convex strictly decreasing increasing convex concave dmin january draft convexity follows fact composition convex increasing decreasing function convex concave function convex ready prove theorem proof chain rule know therefore convexity know similarly one prove hence definition implies holds achieves maximum security efficiency simultaneously eferences rawat anand chen varshney collaborative spectrum sensing presence byzantine attacks cognitive radio networks ieee transactions signal processing vol sinopoli secure estimation presence integrity attacks ieee transactions automatic control vol teixeira sou sandberg johansson secure control systems quantitative risk management approach ieee control systems vol shannon communication theory secrecy systems bell labs technical journal vol marano matta tong distributed detection presence byzantine attacks ieee transactions signal processing vol kailkhura han brahma varshney distributed bayesian detection presence byzantine data ieee transactions signal processing vol asymptotic analysis distributed bayesian detection byzantine data ieee signal processing 
letters vol abdelhakim lightfoot ren distributed detection mobile access wireless sensor networks byzantine attacks ieee transactions parallel distributed systems vol january draft soltanmohammadi orooji decentralized hypothesis testing wireless sensor networks presence misbehaving nodes ieee transactions information forensics security vol soltanmohammadi fast detection malicious behavior cooperative spectrum sensing ieee journal selected areas communications vol hespanha sinopoli resilient detection presence integrity attacks ieee transactions signal processing vol vamvoudakis hespanha sinopoli detection adversarial environments ieee transactions automatic control vol abrardo barni kallas tondi framework optimum decision fusion presence byzantines ieee transactions information forensics security vol yan ren sequential detection adversarial environments online http huber robust statistics springer robust version probability ratio test annals mathematical statistics vol viswanathan aalo counting rules distributed detection ieee transactions acoustics speech signal processing vol dempster laird rubin maximum likelihood incomplete data via algorithm journal royal statistical society series methodological fawzi tabuada diggavi secure estimation control systems adversarial attacks ieee transactions automatic control vol mishra shoukry karamchandani diggavi tabuada secure state estimation sensor attacks presence noise ieee transactions control network systems vol fellouris bayraktar lai efficient byzantine sequential change detection ieee transactions information theory dembo zeitouni large deviations techniques applications springer science business media vol rubinstein kroese simulation monte carlo method john wiley sons key fundamentals statistical signal processing volume detection theory boyd vandenberghe convex optimization january cambridge university press draft
7
subgroups containing regular unipotent elements jun timothy burness donna testerman abstract let simple exceptional algebraic group adjoint type algebraically closed field characteristic let subgroup containing regular unipotent element theorem testerman contained connected subgroup type paper prove two exceptions contained subgroup exceptions arise extends earlier work seitz testerman established containment additional conditions embedding discuss applications main result study subgroup structure finite groups lie type introduction let simple algebraic group adjoint type algebraically closed field characteristic let subgroup let element order main theorem contained closed connected subgroup type unless belongs conjugacy class labelled view towards applications study subgroup structure finite groups lie type desirable seek natural extensions result particular conditions one embed full subgroup subgroup special case main theorem question positive answer classical contained proper parabolic subgroup sln theorem steinberg one see condition embedding necessary considering indecomposable representations arise restrictions indecomposable representations algebraic seitz testerman also provide positive answer simple exceptional algebraic group type large enough still assumption contained proper parabolic subgroup precisely approach requires general results embedding finite quasisimple subgroups exceptional algebraic groups established liebeck seitz instance sufficiently large theorem implies contained proper closed positive dimensional subgroup sufficiently large means natural seek extension theorem removing conditions embedding exceptional type seitz testerman study case semiregular unipotent group notice semiregular date march timothy burness donna testerman semisimple element one hope answer question proper reductive subgroup semiregular case reduction possible particularly interesting situation main result states contained connected subgroup type either paper extend results studying remaining case order assume regular means abelian unipotent group dimension rank equivalently contained unique borel subgroup well known regular unipotent elements exist characteristics form single conjugacy class since order smallest power greater height highest root see order formula hypothesis implies coxeter number recall dimr height highest root particular given aforementioned earlier work may assume given main result following paper subgroup closed connected subgroup isomorphic theorem let simple exceptional algebraic group adjoint type algebraically closed field characteristic let subgroup containing regular unipotent element exactly one following holds contained subgroup contained subgroup iii contained subgroup three cases uniquely determined remark let make comments statement theorem see uniqueness part suffices show every subgroup containing conjugate write subgroups proposition say finally applying theorem lang theorem deduce interesting examples arising iii found craven recent study maximal subgroups socle finite exceptional groups lie type action subgroup adjoint module lie described theorem see section construction explained section let say words construction let subgroup identify unipotent radical spin module take subgroup containing regular unipotent element consider semidirect product note uniquely determined conjugacy one checks composition factor dim follows complement moreover one show contains regular unipotent element unique subgroups hence uniquely determined show subgroup constructed way 
contained subgroup follows theorem similar construction given iii subgroups one show subgroup unique conjugacy contained subgroup conclusion theorem deduced proof lemma also follows kleidman classification maximal subgroups however completeness provide alternative proof following approach use exceptional groups finally let comment adjoint hypothesis statement theorem let simple exceptional algebraic group let gad corresponding adjoint group suppose subgroup containing regular unipotent element regularity implies thus subgroup gad containing regular unipotent element determined theorem next result shows subgroups part theorem sense serre contained proper parabolic subgroup proof given end section theorem connected reductive subgroup reductive algebraic group containing regular unipotent element girreducible view theorem partial analogue subgroups isomorphic simple exceptional groups theorem let simple exceptional algebraic group adjoint type algebraically closed field characteristic let regular unipotent element subgroup remark theorem let subgroup containing regular unipotent element combining theorems deduce contained subgroup contained proper parabolic subgroup particular examples arising parts iii theorem genuine exceptions containment next result follows combining theorem main results corollary let simple algebraic group adjoint type algebraically closed field characteristic let subgroup containing regular unipotent element addition classical assume either contained subgroup one cases parts iii theorem next present applications theorem let simple algebraic group theorem recall finite subgroup lie primitive contained proper closed subgroup positive dimension section guralnick malle determine maximal lie primitive subgroups containing regular unipotent element maximal closed positive dimensional subgroups containing regular unipotent element determined earlier work saxl seitz precisely give list possibilities claim cases actually occur particular proof relies thus arises possibility therefore combining theorems theorem obtain following refinement corollary let simple exceptional algebraic group adjoint type algebraically closed field characteristic suppose maximal lie primitive subgroup containing regular unipotent element let denote socle timothy burness donna testerman one following holds one following holds iii one following holds either one following holds remark corollary lie primitive subgroups containing regular unipotent element lower bound best possible case genuine example deduced recent work litterick however claiming possibilities listed corollary lie primitive contain regular unipotent elements indeed expect list reduced also use theorem shed new light subgroup structure finite exceptional groups lie type let simple exceptional algebraic group adjoint type prime let steinberg endomorphism fixed point subgroup almost simple group maximal subgroups ree groups automorphism groups determined conjugacy kleidman malle respectively similarly handled even odd therefore may assume one cases work many authors problem determining maximal subgroups essentially reduced case almost simple group lie type socle field characteristic see section references therein one main problems determine subgroup form maximal among positive dimensional closed subgroups significant restrictions rank size established problem obtaining complete classification still open special case particular interest integer aforementioned work liebeck seitz shows maximal connected subgroup type results direction recently 
obtained craven one using maximality proves almost every case approach unable eliminate certain values particular case contains regular unipotent element problematic existence subgroups much general setting established serre explains called serre embeddings using theorem one show maximal serre embeddings form also handle subgroups excluded particular follows part theorem subcase part iii conclusion holds theorem conclude introduction let briefly describe main steps proof theorem refer reader section details suppose regular unipotent element let subgroup containing maximal torus set lie without loss generality replacing suitable show may assume contains toral element corresponds diagonalizable element eigenvalues see lemma use known action determine eigenvectors eigenspaces severely restricts possibilities possible obtain restrictions indecomposable summands considering trace semisimple elements small order typically need work elements order way almost cases able reduce situation compatible action subgroup situation given table notation indecomposable summands table explained section observe socle simple summand eigenvector eigenvalue let without loss generality may assume action terms basis given matrix thus ker ker ker ker ker main goal show may assume obtained exponentiating regular nilpotent element respect fixed chevalley basis see section details allows explicitly identify maximal torus subgroup containing means compute eigenvectors eigenspaces terms chevalley basis aid magma simplify computations describe action terms dim dim matrix respect compute bases subspaces ker way obtain expressions terms undetermined coefficients derive relations coefficients considering action relations found using fact ker abelian subalgebra apart handful special cases allows reduce case complete argument showing stabilizer subgroup process elimination extension comprises bulk proof theorem see sections however handful possibilities require attention cases arising part theorem handled section cases action known one timothy burness donna testerman three possibilities stabilizes subalgebra allows reduce case contained proper parabolic subgroup let quotient map using identify may view subgroup may well assume minimal parabolic respect containing contained proper parabolic subgroup regular unipotent element contained subgroup follows combining theorem aforementioned earlier work seitz testerman classical groups inspecting determine action must compatible action given theorem way deduce possibilities completes proof theorem remark anticipate new methods introduced paper applicable generally indeed future work investigate subgroups exceptional algebraic groups form containing semiregular unipotent elements view removing conditions theorem notation notation fairly standard simple algebraic group write set roots positive roots simple roots respect fixed borel subgroup follow bourbaki labelling simple roots often denote root writing module group soc rad denote socle radical respectively write denote summands convenient write ank matrix block occurring multiplicity addition write standard upper triangular unipotent jordan block size acknowledgments testerman supported fonds national suisse recherche scientifique grant numbers burness thanks section centre interfacultaire bernoulli epfl generous hospitality pleasure thank david craven jacques many useful discussions also thank bob guralnick gunter malle iulian simion helpful comments earlier version paper preliminaries section record preliminary results needed proof 
theorem start recalling well known results modular representation theory simple groups main reference alperin representation theory let algebraically closed field characteristic let let hxi sylow subgroup exactly indecomposable say dim unique projective indecomposable element jordan form particular projective dim jordan form jpa precisely simple labelled dim particular every simple trivial module steinberg module easy see jordan form theorem steinberg restriction simple module corresponding algebraic group type see section refer highest weight respect maximal torus algebraic identify subgroups weights torus set integers often write highlight highest weight similarly precisely projective indecomposable labelled simple remainder reducible dim dim dim element jordan form jordan form remaining structure modules described alperin terms composition factors odd notation indicates soc rad convenient define green correspondence see section implies indecomposable projective zero indecomposable zero particular following lemma holds lemma let indecomposable write jordan form jpa main result structure indecomposable following theorem define subtuple tuple form denote writing theorem let reducible indecomposable exists integer subtuple soc even otherwise proof follows discussion section also see proposition corollary let indecomposable precisely two composition factors soc hence dim corollary let reducible indecomposable dim moreover least four composition factors dim similar description indecomposable modules indeed write indecomposable corresponds indecomposable module kernel timothy burness donna testerman traces let representatives unique conjugacy classes elements order respectively note semisimple since let let denote trace lemma mod mod mod proof straightforward calculation using fact identify symmetric power symi natural module composition factors since action diagonalizable therefore next two results immediate corollaries lemma use notation defined lemma mod mod mod mod lemma mod mod mod mod mod mod mod mod mod mod mod let simple algebraic group adjoint type let let positive integer define order recall adjoint module lie algebra lie acts via adjoint representation proposition let simple exceptional algebraic group adjoint type algebraically closed field characteristic let lie adjoint module recorded table proof follows inspecting dimensions centralizers elements order see tables using fact dim dim every semisimple element see section example subgroups table traces elements order adjoint module remark suppose contained adjoint write simply connected group type centre schur multiplier implies therefore every element order lifts element order particular order see table whence respect lie remark cases helpful know eigenvalue multiplicities elements order certain values relevant cases following straightforward obtain information aid magma using algorithm litterick see section thank litterick assistance computations subgroups let simple algebraic group recall good prime types type primes good type proposition let simple algebraic group adjoint type algebraically closed field good characteristic let element order subgroup containing regular subgroup unique proof part follows main theorem part exceptional follows theorem assume classical let subgroup containing let natural module theorem contained proper parabolic subgroup particular type acts irreducibly tensor indecomposably see proposition conjugacy statement follows representation theory finally let assume claim stabilizer result follows since 
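As an aside, the trace computations of the preceding lemmas can be made concrete: on the simple module L(r), realized inside Sym^r of the natural 2-dimensional module, a semisimple element of order n acts as diag(\zeta, \zeta^{-1}) with eigenvalues \zeta^r, \zeta^{r-2}, \ldots, \zeta^{-r}. The following small script (an illustrative sketch of the calculation only; the function names and the choice of primitive root are ours, not the paper's) evaluates such traces numerically.

import cmath

def trace_on_sym(r: int, n: int) -> complex:
    """Trace of diag(z, 1/z), z a primitive n-th root of unity,
    acting on Sym^r of the natural 2-dimensional module; the
    eigenvalues are z**r, z**(r-2), ..., z**(-r)."""
    z = cmath.exp(2j * cmath.pi / n)
    return sum(z ** (r - 2 * j) for j in range(r + 1))

# Traces of an element of order 5 on Sym^r for small r:
for r in range(5):
    t = trace_on_sym(r, 5)
    print(f"r = {r}: trace = {t.real:+.4f}{t.imag:+.4f}i")

For instance, trace_on_sym(1, 3) returns 2cos(2\pi/3) = -1, recovering the familiar trace of an order-3 element on the natural module.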
unique unique justify claim first observe jordan form using lemma fact order particular acts reducibly complete argument applying lemma proposition let simple exceptional algebraic group adjoint type let regular unipotent element timothy burness donna testerman subgroup action adjoint module lie given table proof precise description tilting module given table may assume action deduced actions lie minimal module see table following write tilting module composition factors direct sum weyl modules highest weights terms notation get explained start section express direct sum indecomposable tilting modules form highest weight example suppose highest weight one summand uniserial module shape see lemma highest weight already accounted summand deduce thus lemma projective indecomposable dimension comparing socles follows thus recorded table cases entirely similar omit details remainder section let simple exceptional algebraic group adjoint type algebraically closed field characteristic let rank coxeter number respectively assume contains regular unipotent element order means need recall construction subgroups containing regular unipotent elements following treatment first need new notation let simple lie algebra type fix chevalley basis write convenient define let set abuse notation also write elements fix root familiar chevalley construction allows construct element exp gldim indeterminate passing obtain unipotent subgroup exp aut see proposition note subgroups table action lie proposition given lower bound make similar construction general elements let localization prime ideal proposition chevalley construction element produce exp gldim particular passing define exp aut use general construct certain subgroups group following order state main result proposition recall ordered triple elements chosen satisfy commutation relations standard generators lie algebra namely timothy burness donna testerman following result part iii use notation proposition suppose zfi following hold subgroups hue subgroup iii maximal torus hue map morphism algebraic groups action basis given moreover normalizes contains regular unipotent element proof follows combining lemmas lemma following proposition play important role proof theorem proposition suppose zfi let subalgebra generated let stabilizer subgroup proof let subgroup constructed proposition note contains regular unipotent element clearly stabilizes construction let maximal closed positive dimensional subgroup main theorem contained proper parabolic subgroup corollary also see weisfeiler implies reductive theorem either subgroup thus latter case since stabilize subspace let maximal closed positive dimensional subgroup reductive applying theorem conclude would like able use proposition identify stabilizers aim mind present proposition order state result need additional notation suppose proposition let torus constructed part iii proposition let highest root recall thatph familiar height function set weights write corresponding space similarly statement proof following result use notation proposition suppose zfi suppose clz chosen subgroups exists moreover stabilizer subalgebra generated subgroup proof first observe since take note since clz eigenvector since vector thus maximum since thus addition since commutation relations imply equal therefore thus conclude required final statement concerning stabilizer follows immediately proposition exponentiation section turn different notion exponentiation following seitz let simple exceptional algebraic group adjoint type 
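(Aside: two displays are garbled above, so we record them here. The commutation relations satisfied by the ordered triple (e, h, f) are the standard \mathfrak{sl}_2 relations, in the usual normalization, which we assume:
\[
[h, e] = 2e, \qquad [h, f] = -2f, \qquad [e, f] = h,
\]
and the exponentials produced by the Chevalley construction are the finite sums
\[
\exp(t\,\mathrm{ad}\,e) \;=\; \sum_{k\ge 0} \frac{t^{k}}{k!}\,(\mathrm{ad}\,e)^{k} \;\in\; \mathrm{GL}(\dim\mathfrak{g}, K),
\]
which terminate because \mathrm{ad}\,e is nilpotent; in characteristic p the divided powers (\mathrm{ad}\,e)^{k}/k! are interpreted over the localization of \mathbb{Z} described above before passing to K.)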
algebraically closed field characteristic let denote rank coxeter number let unipotent radical fixed borel subgroup corresponding choice base root subgroup defined explained section may view lie algebraic group via hausdorff formula set lie start recalling proposition proposition suppose exists unique isomorphism algebraic groups lie whose tangent map identity lie suppose contains regular unipotent element order position use proposition study structure replacing suitable conjugate may assume exp proposition let subgroup containing let given maximal torus without loss generality may assume contained borel subgroup defined description action lie proof proposition follows acts lie diag cdr recorded table label form decreasing sequence proposition let regular unipotent element order let torus constructed proposition timothy burness donna testerman table integers exist connected unipotent subgroups hxi particular written commuting product form cdi proof first note since order let unipotent radical borel subgroup note moreover thus lie lie choose lie map proposition extend basis cdi construct corresponding connected unipotent subgroups recall abelian lie abelian subalgebra proof proposition implies contained therefore hxi closed connected unipotent subgroup moreover lie dim thus note connected since adjoint part follows since abelian finally part follows see proposition proposition let unipotent radical borel subgroup let proper subalgebra lie let stabilizer assume contains regular unipotent element order either contained proper parabolic subgroup contained subgroup proof since consider isomorphism lie let centre abelian subalgebra stabilized claim see let note commute since lie see proof proposition implies claim follows therefore positive dimensional subgroup containing regular unipotent element complete argument proceed proof proposition using boreltits corollary let assume contained proper parabolic subgroup maximal closed reductive positive dimensional subgroup subgroups main theorem either subgroup may assume latter situation suppose since lie minimal module follows possibility must contain semisimple element comparing ranks contradiction therefore proper subgroup thus maximal closed reductive subgroup application conclude contained subgroup methods section discuss proof theorem highlighting main steps ideas let simple exceptional algebraic group adjoint type defined algebraically closed field characteristic let rank let lie adjoint module suppose regular unipotent element coxeter number embedding corresponds abstract homomorphism kernel image let simple lie algebra type fix chevalley basis since view basis appropriate root spaces respect cartan subalgebra spanned convenient write set let proposition let corresponding subgroup constructed proposition maximal torus associated morphism replacing suitable may assume exp let morphism algebraic groups may assume isomorphism algebraic groups consider elements without loss generality may assume thus set lemma exists containing proof noted proposition thereqare scalars let consider general element view get integers appearing table since easy see one set desired conjugate finally suppose defining timothy burness donna testerman get implies element contradicts semisimplicity view lemma may assume contains corresponds diagonalizable element eigenvalues since use known action see proof proposition determine eigenvectors eigenspaces example collection eigenvalues given table set centre note contains borel subgroup proof theorem three main steps 
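As an aside, both exponentiation constructions above come down to the fact that the exponential of a nilpotent operator is a finite sum, so no convergence questions arise. A minimal sketch in exact rational arithmetic (characteristic 0, purely illustrative; the matrix below is a stand-in for ad e on a small module, not data from the paper):

from fractions import Fraction

def nilpotent_exp(N):
    """exp(N) for a nilpotent n x n matrix N over Q, computed as the
    finite sum I + N + N^2/2! + ... (note N^n = 0)."""
    n = len(N)

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    result = [[Fraction(i == j) for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]
    fact = 1
    for k in range(1, n):
        power = matmul(power, N)
        fact *= k
        for i in range(n):
            for j in range(n):
                result[i][j] += power[i][j] / fact
    return result

# Regular nilpotent element of sl_3 acting on the natural module:
e = [[Fraction(0), Fraction(1), Fraction(0)],
     [Fraction(0), Fraction(0), Fraction(1)],
     [Fraction(0), Fraction(0), Fraction(0)]]
print(nilpotent_exp(e))  # the unipotent matrix I + e + e^2/2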
describe step elimination initial aim reduce situation action compatible decomposition given table almost cases able achieve goal consider possible decompositions direct sum indecomposable using description modules given section aim eliminating one possibility first use fact decomposition compatible jordan form read relevant tables addition must compatible known eigenvalues noted eigenvalues compute known action note indecomposable summand restriction hsi completely reducible need identify factors order compute eigenvalues summand often sufficient compare eigenvalues expected eigenvalues also use earlier calculations traces elements order obtain restrictions see section approach mind following lemma useful lemma let indecomposable form eigenvalues respectively proof first recall jordan form respectively fixed point simple module highest weight result clear case similarly soc eigenvalue finally suppose highest weight soc one eigenvalues determine second eigenvalue helpful view restriction tilting module ambient algebraic group type see lemma latter module fixed point weight high weight eigenvalue required let illustrate step carried specific case see example suppose jordan form table particular projective thus every indecomposable summand also projective terms notation introduced section possibilities subgroups follows summand eigenvalue contradicts must since eigenvalues see lemma follows see table summands immediately implies let involution since see lemma proposition follows whence mod thus possibility reduced case decomposition compatible see table step extension next observe decomposition given table socle simple summand complete argument aim show stabilizer subgroup almost every case exceptions two special cases appearing statement theorem let basis eigenvector eigenvalue may assume action given matrix respect basis define borel subgroup consider opposite borel subgroup also regular unipotent element order respect basis may assume acts via matrix conditions satisfied say standard basis aid magma construct dim dim matrix represent action respect chevalley basis moreover use proposition compute eigenvalues eigenvectors thus terms convenient write elements standard basis next identify basis ker terms edi see table since write timothy burness donna testerman similarly ker ker ker ker using magma straightforward compute bases relevant kernels computations done hand much quicker efficient use machine given bases say write goal determine scalars use specified actions derive relations coefficients relations determined exploiting fact regular unipotent elements example observe lie bracket since regularity implies abelian follows lie abelian subalgebra latter equality recall thus proceeding way goal reduce top case zfi moreover want find integers notation section indeed proposition implies stabilizer subgroup genericp situation described part theorem cases unable force zfi appealing proposition still show conclusion holds remaining cases action incompatible show stabilizes subalgebra precisely establish following result reduces proof theorem handful cases appearing table see remark conjugacy statement part theorem reduction theorem let simple exceptional algebraic group adjoint type algebraically closed field characteristic let subgroup containing regular unipotent element set lie chevalley basis one following holds contained subgroup uniquely determined stabilizes subalgebra one cases table prove reduction theorem sections considering possibility turn step parabolic analysis final step proof 
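(For the matrices left garbled in the description of Step 2: with respect to a standard basis e_1, \ldots, e_m of an indecomposable summand, one may take the regular unipotent element to act as the full Jordan block — our normalization, chosen so that the kernels computed below come out as coordinate subspaces:
\[
x \mapsto J_m(1), \qquad (x-1)\,e_i = e_{i-1} \ (e_0 = 0), \qquad \ker\,(x-1)^{k} = \langle e_1, \ldots, e_k\rangle,
\]
with the opposite regular unipotent y acting by the transposed block.)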
theorem concerns cases arising theorem given table view proposition may assume contained proper parabolic subgroup proceed studying possible embeddings subgroup take minimal parabolic let quotient map identifying may view subgroup show subgroups table exceptional cases theorem subgroup containing regular unipotent element namely use study composition factors turn imposes restrictions decomposition possibilities listed table way arrive two special cases statement theorem see section details notice adopt similar approach proof theorem example illustrate ideas let explain handle case recall example reduced situation compatible decomposition let soc let standard basis first consider indeed inspecting table see unique congruent modulo spanned sum simple root vectors must scalar similarly contained space ker take finally space ker note since follows highest root note since ker ker considering action see quickly deduce finally one checks condition yields setting easy see satisfy relations timothy burness donna testerman thus moreover set moreover working mod see proof proposition since thus stabilizes contained subgroup proposition completes proof theorem close section giving proof theorem proof theorem let lie adjoint module seeking contradiction suppose proper parabolic subgroup unipotent radical levi factor may well assume minimal respect containment particular quotient map identify contained proper parabolic subgroup regular unipotent element see lemma combining theorem main theorem deduce contained subgroup factors read information tables use determine factors indeed composition factor irreducible unipotent radical acts trivially factors decompositions compatible decomposition given table way reach contradiction see first observe least one trivial composition factor coming inspecting table immediately implies suppose table factors follows inspecting table using fact unique trivial composition factor deduce however cases see composition factor incompatible two possibilities eliminated similar fashion example composition factors inspecting table considering trivial composition factors deduce case find two factors contradiction mentioned proof theorem given sections carry steps group turn handle step section thus completing proof theorem case begin proof theorem handling case noted remark result case deduced proof lemma also follows kleidman classification maximal subgroups subgroups theorem let simple algebraic group type algebraically closed field characteristic let subgroup containing regular unipotent element contained subgroup proof noted section may assume see let lie adjoint module fix chevalley basis use notation introduced section particular borel subgroup eigenvalues let recall section may assume obtained exponentiating regular nilpotent element assume exp according table jordan form follows use notation projective indecomposable defined respectively case semisimple first assume semisimple recall jordan form view follows decomposition implies eigenvalue compatible assume section let standard basis summand action given matrix goal show zfi furthermore seek integers allow apply proposition find space ker ker ker gives scalars expressions specific coefficients depend characteristic coefficients presented set considering action deduce satisfy relations get thus see proof proposition finally applying proposition conclude contained subgroup next assume ker ker ker spanned vectors use notation therefore timothy burness donna testerman considering action deduce moreover implies arguing 
setting using proposition deduce contained subgroup suppose considering action deduce may well set one checks relations satisfied moreover take apply proposition follows stabilizer subgroup case rad complete proof theorem may assume rad suppose reducible indecomposable summand dim see corollary thus lemma implies jordan block size incompatible assume implies least three composition factors two would jordan form contradicts lemma follows jordan form dim considering theorem easy see possibility projective thus however implies involution trace see section incompatible proposition contradiction case rad finally let assume projective thus indecomposable summand also projective since eigenvalues deduce fact considering trace see option compatible decomposition respect subgroup containing regular unipotent element see table let summand socle let standard basis spaces ker ker whereas ker get set considering action deduce moreover implies deduce relations satisfied desired result follows applying proposition completes proof theorem subgroups reduction section goal establish theorem proof theorem case completed section main result following theorem let simple algebraic group type algebraically closed field characteristic let subgroup containing regular unipotent element set lie one following holds contained subgroup stabilizes subalgebra proof view may assume set standard notation particular eigenvalues jordan form see table may assume obtained exponentiating regular nilpotent element respect chevalley basis also useful note case semisimple implies none decompositions compatible eigenvalues given example given decomposition implies relevant eigenvalues contradicts assume let summand let standard basis section action given matrices respectively opposite borel subgroup spaces get expressions specific coefficients depend characteristic ones given set use action deduce moreover relations elements defined satisfied follows timothy burness donna testerman see proof proposition working mod applying proposition conclude contained subgroup suppose use notation similarly considering action deduce setting get one check relations satisfied particular set applying proposition deduce stabilizer subgroup case rad remainder may assume rad first assume arguing case proof theorem straightforward reduce case example suppose reducible indecomposable summand jordan form see implies least three composition factors use lemma see jordan form dim using theorem deduce option implies involution trace incompatible proposition assume suppose reducible indecomposable summand combining lemma theorem deduce jordan form unique summand summand simple however incompatible example theorem implies jordan form soc duality therefore may assume indecomposable summand either simple projective possibilities follows subgroups eigenvalues since eigenvalues respectively see lemma follows case ruled considering trace hence compatible containment subgroup see table need show contained subgroup repeat argument case details entirely similar case rad assume suppose reducible indecomposable summand easy check jordan form either unique summand jordan form theorem implies duality soc incompatible similarly case soc option implies trace contradicts proposition follows indecomposable summand either simple projective arguing case using fact eigenvalues deduce one check element order trace proposition implies therefore action compatible subgroup see table remains establish desired containment let standard basis summand decomposition usual manner deduce 
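(Two standard facts recalled in Section 2 are used silently throughout this case analysis; we record them explicitly, in our notation. A regular unipotent element u of order p acts on the simple module L(r), 0 \le r \le p-1, as a single Jordan block, and on any projective module P freely:
\[
u|_{L(r)} = J_{r+1}, \qquad u|_{P} = J_p^{\,\dim P / p}.
\]
These are what allow a candidate decomposition to be tested against the known Jordan form of u on \mathrm{Lie}(G).)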
scalars may assume contained ker get since action given matrix deduce finally condition obtained considering action implies easy see relations satisfied complete argument usual manner via proposition case rad finally let assume projective thus indecomposable summand also projective since eigenvalues quickly deduce one following timothy burness donna testerman let element order diagonal matrix diag root unity decomposition compute eigenvalues compare results list eigenvalue multiplicities elements order noted remark latter computed using litterick algorithm example conjugate diagonal matrix one checks element order acts eigenvalues way deduce possibility let summand socle let standard basis spaces ker ker get finally one checks ker take usual manner considering action get addition condition yields following system equations equations imply setting use proposition show contained subgroup hand set one checks deduce hence easy check subalgebra gives case statement theorem completes proof theorem subgroups reduction following result prove section establishes theorem groups type theorem let simple adjoint algebraic group type algebraically closed field characteristic let subgroup containing regular unipotent element set lie one following holds contained subgroup one stabilizes subalgebra proof eigenvalues see section inspecting table see jordan form freely adopt notation introduced section case semisimple one decompositions compatible eigenvalues see may assume let summand let standard basis one checks spaces result quickly follows via proposition example setting considering action see deduce one checks see proof proposition applying proposition deduce contained subgroup assume ker ker get timothy burness donna testerman set action deduce take using proposition conclude contained subgroup case rad essentially repeat argument proof theorem see first paragraph case indeed easy reduce case compatible assume suppose reducible indecomposable summand applying lemma theorem deduce jordan form one following particular unique summand structure described theorem easy see existence summand contradicts instance suppose jordan form duality soc thus cases similar therefore may assume indecomposable summand either simple projective considering eigenvalues deduce find trace contradicts proposition hence possibility usual manner construct basis summand set consider action see deduce one checks condition gives result follows usual manner via proposition case rad subgroups first assume reducible indecomposable summand usual way combining lemma theorem deduce jordan form one following suppose jordan form applying theorem using deduce soc possibility incompatible eigenvalues let assume jordan rule form follows soc thus one following however clear none decompositions compatible remainder may assume indecomposable summand either simple projective considering eigenvalues deduce computing trace appealing proposition also remark follows possibility particular reduced case decomposition compatible containment subgroup see table let summand let standard basis reader check set considering action deduce condition yields thus addition relations satisfied set applying proposition conclude contained subgroup timothy burness donna testerman case rad implies projective indecomposable summand also projective view must case traces respectively need work harder eliminate decompositions let element order diagonal matrix diag root unity compute eigenvalues compare eigenvalue multiplicities elements order obtain using algorithm way 
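As an aside, the eliminations by eigenvalue multiplicities carried out here can be mechanized. The sketch below is our own toy version, in the spirit of the Litterick-style computation cited above: it lists the multiplicities of the eigenvalues \zeta^k of an order-n semisimple element acting on a direct sum of simple A1-modules L(r); for genuine indecomposable or tilting summands one would substitute their actual weight multisets.

from collections import Counter

def weights_L(r):
    """Weights of L(r) in the generic case (r < p): r, r-2, ..., -r."""
    return [r - 2 * j for j in range(r + 1)]

def eigenvalue_multiplicities(summands, n):
    """Multiplicity of each eigenvalue zeta^k (zeta a primitive n-th
    root of unity) for an order-n torus element acting on the direct
    sum of the modules L(r), r in summands."""
    return Counter(w % n for r in summands for w in weights_L(r))

# Two candidate decompositions compared against an element of order 4:
print(eigenvalue_multiplicities([2, 2, 0], 4))   # Counter({2: 4, 0: 3})
print(eigenvalue_multiplicities([3, 1], 4))      # Counter({3: 3, 1: 3})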
deduce one following case compatible containment subgroup see table let summand socle let standard basis usual manner deduce scalars considering action together condition see either latter situation set check relations satisfied allows apply proposition conclude contained subgroup assume set one checks since preserves lie bracket yields conclude subalgebra part statement theorem case let summand socle let basis may assume actions given matrices subgroups respectively terms basis one checks ker whereas following spaces ker ker ker ker dimension respectively get set consider relations among obtained action basis also helpful note regular unipotent element abelian thus way deduce next one checks thus since preserves lie bracket yields similarly thus relation implies straightforward check subalgebra completes proof theorem reduction section establish following result proves theorem groups type theorem let simple adjoint algebraic group type algebraically closed field characteristic let subgroup containing regular unipotent element set lie one following holds contained subgroup timothy burness donna testerman one stabilizes subalgebra proof recall may assume see collection eigenvalues table jordan form follows note case semisimple eigenvalues incompatible may assume thus view let summand let standard basis usual manner straightforward show appropriate subalgebra use proposition show holds statement theorem example get set considering action deduce furthermore relation implies deduce see proof proposition apply proposition case rad combination lemma corollary implies jordan block size contradicts next assume usual way applying lemma theorem appealing reduce case indecomposable summand either simple projective considering eigenvalues follows involution trace contradicts proposition therefore entirely straightforward show summand appropriate result follows via proposition usual fashion subgroups similar argument applies reducible summand implies possibility soc however implies trace contradiction therefore indecomposable summands simple projective considering eigenvalues deduce rule computing trace complete argument previous case case rad difficult reduce case indecomposable summand either simple projective considering eigenvalues deduce computing trace see rule first possibility considering trace calculation also implies let summand fix standard basis considering spaces ker ker ker deduce setting using action deduce one check relations satisfied set timothy burness donna testerman use proposition deduce contained subgroup case rad finally let assume projective indecomposable summand also projective considering eigenvalues follows computing trace see one following cases trace compatible proposition element order eigenvalues one checks elements act way example see table possibility ruled one stabilizes subalgebra spanned vector indeed stabilizes soc spanned vector one checks hwi case statement theorem finally suppose compatible containment subgroup see table let summand socle let standard basis usual way obtain may assume considering action together condition deduce set subgroups directly apply proposition however minor modification argument proof proposition work first observe clz terms notation used proof proposition moreover set exp note final equality note higher degree terms zero since maximum calculating passing get therefore comparing deduce finally implies conclude proof proposition particular contained subgroup completes proof theorem reduction section complete proof reduction theorem see 
theorem main result following theorem let simple algebraic group type algebraically closed field characteristic let subgroup containing regular unipotent element set lie one following holds contained subgroup stabilizes subalgebra proof first note may assume see fact may assume since case handled section see examples recall timothy burness donna testerman collection eigenvalues note jordan form follows see table case semisimple considering eigenvalues deduce let summand let standard basis straightforward show appropriate result follows applying proposition note ker cause special difficulties assume ker ker get set consider action see deduce set see proof proposition applying proposition deduce contained subgroup case rad dimension indecomposable summand least implies jordan form block size contradiction next assume suppose reducible indecomposable summand dim see corollary view lemma deduce jordan form implies dim even least four composition factors thus subgroups corollary contradiction therefore may assume indecomposable summand either simple projective clearly possibility however implies trace contradicts proposition assume previous case applying lemma theorem appealing straightforward reduce case indecomposable summands either simple projective moreover considering eigenvalues deduce computing trace follows entirely straightforward show summand appropriate subalgebra result follows via proposition case rad previous case quickly reduce situation indecomposable summand simple projective case computing trace deduce case compatible containment subgroup see table usual let summand let standard basis get may set considering action basis deduce one checks relations satisfied thus set use proposition conclude contained subgroup case rad timothy burness donna testerman arguing usual manner straightforward reduce case indecomposable summand either simple projective considering eigenvalues deduce computing trace see option case compatible desired containment subgroup usual construct summand terms standard basis easy show appropriate conclude applying proposition case rad first assume reducible indecomposable summand usual way deduce jordan form one following unique summand jordan form either moreover odd number composition factors easy see incompatible similar reasoning rules cases finally suppose jordan form jordan form implies soc soc respectively however existence summand would mean eigenvalue case see therefore conclude every indecomposable summand either simple projective precisely view follows computing trace deduce case compatible containment subgroup let summand let standard basis usual manner deduce set considering action basis get finally one check condition implies desired result follows proposition case rad subgroups complete proof theorem may assume recall case handled earlier examples usual let first assume reducible indecomposable summand applying lemma theorem deduce jordan form one following possibility fact implies soc summand eigenvalue contradicting therefore reduced case indecomposable summand simple projective considering deduce claim case decomposition compatible containment subgroup one check eight decompositions compatible trace consider traces elements larger order let element order case straightforward compute eigenvalues using litterick algorithm compute eigenvalues every element order way deduce claimed let summand standard basis spaces ker ker ker respective dimensions gives considering action basis deduce condition yields equations equations imply set apply 
proposition show contained subgroup hand may assume straightforward check subalgebra puts case theorem completes proof theorem timothy burness donna testerman proof theorem final section complete proof theorem view theorem may assume type moreover work sections remains handle cases appearing table cases stabilizes subalgebra lie applying proposition assume contained proper parabolic subgroup unipotent radical levi factor following result combined theorem completes proof theorem recall craven constructed subgroup satisfying conditions parts iii theorem established uniqueness conjugacy see remark theorem let simple exceptional algebraic group adjoint type algebraically closed field characteristic let subgroup containing regular unipotent element let lie adjoint module contained proper parabolic subgroup either proof may assume minimal respect containing let quotient map identify arguing first paragraph proof theorem see end section deduce contained subgroup addition theorem implies contained subgroup must one cases table noted proof theorem composition factors read tables imposes restrictions composition factors considering possibility turn comparing composition factors table show cases labelled statement theorem compatible options first assume composition factors inspecting table easy see compatible levi subgroup similarly composition factors given thus eliminate case repeating argument proof theorem next suppose three possibilities follows three cases see two trivial composition factors table implies five factors incompatible three possibilities similarly many factors rule would get composition factors absurd finally suppose since weyl module composition factor see four factors thus option subgroups finally let assume three possibilities follows inspecting table counting number trivial composition factors quickly reduce small number possibilities considering composition factors straightforward reduce case example rule would many factors similarly would least three factors compatible three possibilities rule would imply factor finally suppose least three composition factors possibility references alperin local representation theory cambridge stud adv vol cambridge univ press borel tits unipotents paraboliques groupes invent math bosma cannon playoust magma algebra system user language symbolic comput bourbaki groupes lie chapitres hermann paris carter simple groups lie type john wiley sons london carter finite groups lie type conjugacy classes complex characters john wiley sons london cohen griess finite subgroups complex lie group type proc sympos pure math cooperstein maximal subgroups algebra craven maximal subgroups exceptional groups lie type preprint gorenstein lyons solomon classification finite simple groups number mathematical surveys monographs vol amer math guralnick malle rational rigidity compos math janusz indecomposable representations groups cyclic sylow subgroup trans amer math soc kleidman maximal subgroups chevalley groups odd ree groups automorphism groups algebra lawther jordan block sizes unipotent elements exceptional algebraic groups comm algebra lawther testerman subgroups exceptional algebraic groups mem amer math soc liebeck saxl testerman simple subgroups large rank groups lie type proc london math soc liebeck seitz subgroups generated root elements groups lie type annals math liebeck seitz subgroup structure exceptional groups lie type trans amer math soc liebeck seitz maximal subgroups positive dimension exceptional algebraic groups mem amer math soc liebeck 
and Testerman, "Irreducible subgroups of algebraic groups," Q. J. Math.
Litterick, Finite simple subgroups of exceptional algebraic groups, Ph.D. thesis, Imperial College London.
Litterick, "Finite subgroups of exceptional algebraic groups," Mem. Amer. Math. Soc., to appear.
Malle, "The maximal subgroups of ²F₄(q²)," J. Algebra.
Malle and Testerman, Linear Algebraic Groups and Finite Groups of Lie Type, Cambridge Studies in Advanced Mathematics, vol., Cambridge University Press.
Proud, Saxl, and Testerman, "Subgroups of type A1 containing a fixed unipotent element in an algebraic group," J. Algebra.
Saxl and Seitz, "Subgroups of algebraic groups containing regular unipotent elements," J. London Math. Soc.
Seitz, "Unipotent elements, tilting modules, and saturation," Invent. Math.
Seitz and Testerman, "Extending morphisms from finite to algebraic groups," J. Algebra.
Seitz and Testerman, "Subgroups of type A1 containing semiregular unipotent elements," J. Algebra.
Serre, "Exemples de plongements des groupes PSL₂(Fₚ) dans des groupes de Lie simples," Invent. Math.
Steinberg, "Representations of algebraic groups," Nagoya Math. J.
Steinberg, Endomorphisms of Linear Algebraic Groups, Mem. Amer. Math. Soc.
Testerman, "The construction of the maximal A1's in the exceptional algebraic groups," Proc. Amer. Math. Soc.
Testerman, "A1-type overgroups of elements of order p in semisimple algebraic groups and the associated finite groups," J. Algebra.
Testerman and Zalesski, "Irreducibility in algebraic groups and regular unipotent elements," Proc. Amer. Math. Soc.
Weisfeiler, "On one class of unipotent subgroups of semisimple algebraic groups," preprint.
Timothy Burness, School of Mathematics, University of Bristol, Bristol, UK.
Donna Testerman, Institute of Mathematics, Station 8, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.
4
decompositions ideals sema aug bstract theory describes scalar multiples betti diagrams graded modules polynomial ring linear combination pure diagrams positive coefficients results describe decompositions explicitly paper focus betti diagrams ideals mainly characterize decomposition ideal describe using decompositions related ideals ntroduction recent theory addresses characterization betti diagrams graded modules polynomial rings originated pair conjectures boij whose proof given eisenbud schreyer result gives characterization betti diagrams graded modules scalar multiples information theory refer informative survey written theory brings idea decomposition betti diagrams graded modules whose resolutions pure resolution resolution pure decomposition consists one pure diagram positive coefficient expected much known behavior decomposition ideal polynomial ring characterization decompositions either coefficients chain degree sequences associated pure diagrams would also assist understanding interpreting structural consequences decomposition betti diagrams although theory quite recent lot open problems improvements contributions theory quite impressive cook berkesch erman kumini sam discuss theory perspective poset structures nagel sturgeon examine decomposition ideals raised combinatorial objects show combinatorial importance coefficients pure diagrams decompositions interest ideals results gibbons jeffries mayes rauciu stone white provide relation decomposition betti diagrams complete intersections degrees minimal generators another recent work done francisco mermin schweig paper study behavior coefficients borel ideals sake simplicity abbreviation used paper study behavior decompositions ideals obtain neat relation decompositions given lex ideal related lex ideals throughout paper main focus chain degree sequences decomposition also provide strong correlation coefficients pure diagrams well reason interested decomposition ideals based fact lex ideals particular betti diagrams prove ideals largest betti numbers among ideals hilbert function pivotal property ideals makes decompositions worthy study moreover formula gives nice formulation betti diagram lex ideals main goal obtain pattern decomposition lex ideal using decompositions related ideals follows let polynomial ring variables lexicographic order lex lex ideal ideal decomposed also ideal ideal first main result mathematics subject classification key words phrases decompositions betti diagrams ideals sema paper describes beginning decomposition terms decomposition algorithm decomposition provides chain degree sequences first degree sequence chain top degree sequence betti diagram algorithm second degree sequences top degree sequence remaining diagram subtraction first pure diagram suitable coefficient betti diagram continues betti diagram decomposed completely thus saying beginning decomposition mean several degree sequences pure diagrams obtained beginning decomposition state first result theorem let ideal codimension suppose write decomposition top degree sequences length linear combination pure diagrams greater decomposition form linear combination pure diagrams greater second main result article devoted pure diagrams say degree sequences decomposition betti diagrams polynomial ring like theorem notice similarities decompositions lex ideal reveal entire part decomposition containing pure diagrams length less shows precisely end decomposition pure diagrams length less exactly coefficients particular prove theorem let artinian ideal 
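Since the displayed formula for a pure diagram is garbled in the background discussion below, we record the standard Herzog–Kühl normalization here (a known formula, not a new claim): for a degree sequence d = (d_0 < d_1 < \cdots < d_s), the associated pure diagram \pi(d) has entries
\[
\pi(d)_{i,\,d_i} \;=\; \prod_{j\neq i}\frac{1}{|d_j - d_i|}, \qquad 0 \le i \le s,
\]
and all other entries zero; multiplying by the least common multiple of the denominators gives the integral normalization used in this paper.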
codimension suppose decomposed different gmin let ideal stable ideal codim ideal also codim artinian ideal top degree sequences length less coefficients linear combination pure diagrams associated degree sequences length decomposition chain degree sequences length exactly coefficients linear combination pure diagrams associated degree sequences length plan paper first discuss useful relations betti numbers ideals section also describe entire betti diagram lex ideal terms betti numbers colon ideal stable ideal section section give proof theorem gives relation beginning decompositions proof theorem given section decompositions ideals combining results theorems case give following diagram summarize nicely relation degree sequences decompositions ideals length length chain length degree sequences degree sequences degree sequences coming coming section see time decompositions may enough cover pure diagrams decomposition since might pure diagrams length may obtained ideal one naturally hopes obtain description entire decomposition ideal terms related ideals section includes observations possible way describe entire chain degree sequences decomposition lexicographic order lex lex makes think colon ideals case section one may expect similar results lex ideals indeed examples show relation decompositions lex ideal colon ideals allows give almost full description pure diagrams appearing decomposition background reliminaries throughout section assume graded polynomial ring variables field variable degree one case see description betti diagram terms betti numbers let graded minimal graded free resolution written numbers betti numbers considered betti diagram whose entry row column let sequence integers length graded free resolution called pure resolution type syzygy module generated elements degree words betti numbers zero except betti diagram module called pure diagram type formula pure diagram associated based equations introduced otherwise define partial order degree sequences dsi dti order degree sequences induces order pure diagrams decomposition betti diagram module linear combination pure diagrams positive coefficients algorithm decomposition algorithm algorithm decompose given betti diagram following steps determine top degree sequence betti diagram rmodule say sema determine coefficient pure diagram min subtract betti diagram new entries positive repeat first second steps remaining diagram betti diagram completely decomposed pure diagrams thus decomposition graded gives ordered decomposition betti diagram example instance let ideal decomposition given consider monomial ideal denote set minimal monomial generators denote subset containing minimal generators degree notation gmin used initial degree monomials gmax stand maximum degree monomials throughout paper next state definitions graded lexicographic monomial order ideal definition let xsnn xtnn two monomials either deg deg deg deg first index said glex graded lexicographic order definition let polynomial ring monomial ideal generated monomials ideal called ideal lexicographic ideal lex ideal monomial existence glex deg deg implies simplicity use lex order glex unless order different lexicographic order section make observations betti diagrams ideals aim get correlations betti numbers lex ideals next lemma shows colon ideal also ideal lemma let ideal consider colon ideals also ideal proof let monomial let monomial deg deg glex glex implies hence let monomial define largest index divides recall monomial ideal said stable every monomial also 
next quote proposition proposition formula let stable ideal proj dim max reg max deg decompositions ideals always assume unless otherwise stated follow lemma indicates relation minimal generators ideals next lemma provide crucial short exact sequence ideals lemma ideal unique monomial ideals moreover ideal also ideal since stable proof proof follows immediately fact ideal graded lex order lemma let graded free resolutions ideals short exact sequence moreover graded minimal free resolution proof form ideal implies short exact sequence mapping cone short exact sequence provides free resolution let implies either monomial ideal therefore divisible minimal generator therefore ideals common minimal generators tells cancellation mapping cone structure resulting graded free resolution minimal analyze betti numbers ideals know ideals stable addition lex ideal thus formula gives rise following decomposition say name initial degree gmin betti numbers following lemmas provide relations identities betti numbers help describe entire betti diagram respect betti numbers lemma gmin gmin gmin stability ideals lemma formula gives following identities betti numbers know thus follows sema lemma gmin gmax proof say gmax suppose gmin monomials degree divisible thus form gmax minimal generator degree therefore written two monomials deg divisible deg since degree monomials divisible minimal generator thus need show equality possible possible prove contradiction end suppose minimal generator since find least one minimal generator degree becomes minimal generator degree monomials degree divisible monomial contradicts minimal generator hence suppose betti diagrams gmax gmax gmax table betti diagrams therefore short exact sequence lemma together lemmas discuss section yield betti diagram following form gmax gmax gmax gmax gmax table betti diagram betti diagrams ideals overlap row diagram words betti numbers kth row may expressed terms betti numbers first row respectively oij ecompositions section prove theorem let chain length top degree sequences suppose chain first top degree sequences decomposition betti diagram exactly coefficients except possibly coefficient decompositions ideals recall given top degree sequence normalized pure diagram obtained lcm thus formula provides pure diagrams integer entries pure diagrams integer entries let top degree sequence betti diagram see table essentially follows fact betti diagrams may overlap row betti diagram degree shift due multiplication top degree sequence thus becomes first step fact could repeat process degree sequence suppose assume next degree sequence therefore steps decomposing would get remaining diagrams let next top degree sequence betti diagram paragraph shows becomes next top degree sequence betti diagram remaining diagram first steps decompositions look like following table remaining diagram steps similarly table remaining diagram step sema construction deduce algorithm exposes coefficient pure diagram min similarly rational number coefficient pure diagram min hence need look row betti diagram overlap thus need think top degree sequences length case let eliminated step decomposition algorithm words length whereas length shows length degree sequences decomposition hence decomposition pure diagrams length less recall focus degree sequences length three since length two need pay attention step decomposition besides table already shows top degree sequence remaining diagram therefore first top degree sequences decomposition coefficients case suppose eliminated step 
decomposition moreover assume vanish step chain degree sequences decomposition length length length entries row eliminated step decomposition easy observe remaining diagram table seen entries row therefore remaining diagram subtracting first pure diagrams decompositions ideals dti dti dti furthermore similar relations coefficients decomposition first steps coefficients pure diagrams decomposition dsi min dsi similarly corresponding coefficient pure diagram decomposition becomes dsi min dsi min assume entries corresponding dsi eliminated thus follows hence however equality may true coefficients since eliminated next step hence remaining diagram bring back case last top degree sequence length decomposition remaining diagram clearly shows shows degree sequence decomposition next step summary chain top degree sequences length coefficients sema becomes first top degree sequences length hence shown beginnings chain degree sequences decompositions identical believe analogous result remark let ideal also lexsegment ideal turns stable ideal codim suppose minimal free resolutions respectively get short exact sequence lemma mapping cone following minimal free resolution yields gmin gmin using properties case conclude betti diagrams either overlap gmin row betti diagram overlap identify gmin therefore betti diagram gmax gmax gmax gmax gmax gmax gmax gmax table betti diagram henceforth proof theorem easily modified polynomial ring variables corollary let ideal pure diagrams length decomposition chain pure diagrams appears beginning decomposition decompositions ideals oij ecomposition theorem showed pure diagram appears summand decomposition show summand decomposition coefficient possible except last one section consider end decomposition show degree sequences length less decomposition occurs precisely degree sequences length less decomposition prove claim artinian ideals except ones form different gmin main idea proof induction whose base step also requires tedious case analyzing decompositions finally schemes case analyzing help demonstrate degree sequences length less coincide entirely coefficients furthermore conjecture statement theorem also true whereas proof situation requires case analyzing becomes infeasible decomposing betti diagram first observe pure diagrams length less decomposition betti diagram show suffices check remaining diagrams several steps decomposition algorithm also notice row betti diagram form say gmax gmax gmax assume gmin betti diagram lex ideal first degree sequence min notice artinian lex ideal property yields thus gmax decomposition becomes therefore remaining diagram first step becomes sema next pure diagram coefficient min case let implied following inequalities thus algorithm eliminates entry remaining diagram next coefficient pure diagram becomes min creates two possible observe remaining diagrams case result obtain case decompositions ideals case case let algorithm gives next top degree sequence coefficient min therefore splits two case next degree sequence coefficient min implies possible thus coefficient must remaining diagram looks like sema algorithm case exist maximum degree suppose case note gmax contradicts assumption gmax remaining diagram top degree sequence remaining diagrams coefficient min next observe possible case result get remaining diagram case would like continue decompose one step coefficient pure diagram comes min remaining diagram becomes decompositions ideals forced caused remaining diagram becomes case following inequalities thus remaining 
diagram case sema notice pattern remaining diagram similar one beginning case possible top degree sequences case case replaced next steps case case iii gmax formula get artinian lex segment ideal let requires contradiction thus case exits maximum degree suppose gmax thus since remaining diagram subtracting two pure diagrams corresponding coefficients next top degree sequence coefficient min case let decompositions ideals thus min next observe one step decomposition get following two possible cases remaining diagram turns case case remaining diagram form case let case let hence could pause decomposing process since observed enough part decomposition betti diagram compare possible remaining diagram ones obtained decompositon betti diagram examine decomposition lex ideal first trivial case notice gmin statement vacuously true since sema next induct difference initial degrees gmin gmin base step step show statement true lex ideals gmin gmin gmin gmin since lex ideal end modify betti diagram table particular lex ideal respectively becomes next degree sequence coefficient min analyze possible cases next step decomposition remaining diagram three steps becomes obviously first top degree sequence coefficient case since remaining diagrams matches one case hence done decompositions remaining diagram otherwise decomposition results either case case iii keep decomposing betti diagram next degree sequence coefficient min implies possible therefore remaining diagram case means decompositions ideals also case corresponds case move next step min splits three cases case case case might need recall case requires case contradicts first assumption case case similar case continuing algorithm leads remaining diagram case thus either case case holds decomposition always end remaining diagrams even ones size decomposition turns case iii get hand assumption former latter inequalities imply respectively contradiction thus situation come true decomposition follows case decomposition follows different path next degree sequence decomposition becomes coefficient min move next possibility coefficient thanks inequality sema algorithm follows case iii decomposition relation yields remaining diagram hence betti diagram decomposed case keep decomposing betti diagram get three next coefficient min equal since relations also required case decomposition relations remaining diagram case required relations consequence following inequalities case decompositions ideals remaining diagram becomes diagram matches remaining diagrams case decomposition decomposes case iii suppose decomposition several steps ends case want show remaining diagram occurs well gmax may another case gives similar pattern like diagram thus assume maximum degree therefore next coefficient pure diagram remaining diagram turns also assumed betti diagram decomposes case notice entries remaining diagram closely related one case thus next coefficient min follows paths decomposition case decompositions always come remaining diagrams several steps decomposition moreover observe share length two pure diagrams coefficients also length three pure diagrams hence every possible case end remaining diagrams words decompositions coincide precisely several steps algorithm thus statement holds case gmin gmin induction hypothesis let statement true lex ideals gmin sema need show also true lex ideals satisfying gmin gmin end identify initial degrees gmin gmin suppose lex ideal prove two cases case since lex ideal write notice gmin otherwise contradicts thus gmin define ideal 
containing monomials degree greater equal note also lex ideal gmin define lex ideal gmin gmin therefore induction hypothesis ends decompositions pure diagrams length less coefficients length degree sequences length degree sequences easy see hand decomposed gmin clearly gmin gmin induction hypothesis decompositions ends recall get gmin gmin suppose gmin gmin thanks induction hypothesis decompositions ends length pure diagrams length pure diagrams also using theorem decompositions ideals observed remaining diagram remaining diagram shows ends also know ends hence statement true remains study gmin gmin means gmin follows gmin gmin gmin strict inequality applying process done applied prove statement equality end situation gmin gmin gmin gmin repeat get form lex ideal one check decomposition ideal decompositions ideals end decomposition therefore statement true ideal may assume without loss generality form gmin gmin observation completes proof case case let write gmin implies consider clearly statement trivially true ideal moreover gmin gmin base case decompositions ends hence length pure diagrams length pure diagrams length pure diagrams similar case consider lex ideal gmin thus result case statement true exactly trick case show ends follows statement holds case already shown decomposition satisfy statement nevertheless general stable ideal already assumed form statement conjecture statement theorem holds artinian theorem shows ends decompositions exactly artinian lex ideals except ones form different gmin hand based computations done using boijsoederberg packages computer algebra software see strongly believe result also true lex ideals particular form urther bservations xamples artinian lex ideal codimension shown summands length pure diagrams decomposition summands pure diagrams length less decomposition appear decomposition sema ideal beginning end respectively length degree length degree extra length coming sequences coming degree sequences might also pure diagrams length ones coming decomposition however middle part containing pure diagrams length comes quite clear one might ask whether ideals help describe middle part fact examples show quite strong relation well nevertheless results obtained sections observations discuss section provide close approximation chain degree sequences either observation section enough cover entire middle part decomposition decompositions may give redundant degree sequences section illustrate elation decompositions ideals via examples example let xyz lex segment ideal lex segment ideal stable lex segment similarly ideals lex segment ideals construct similar short exact sequences like lemma ideals unlike case might cancellations mapping cone short exact sequences ideals means cancellations betti diagram since mapping cone structure may yield minimal free resolution situation causes different degree sequences appear decomposition first notice find decomposition pure diags length consider short exact sequence ideals mapping cone short exact sequence ideals ends one cancellation first degree interpret ignoring one pure diagram decompositions ideals beginning one corresponding degree sequence beginning decomposition therefore pure diags length pure diagrams length less coming ideal length pure diags hence claim summands coefficients decomposition coefficients indeed decomposition impressive point example able describe entire chain degree sequences colon ideals ideal example example shows different situations might occur previous example let xyz ideal observe one 
cancellation occurs mapping cone process ideal decompositions pure diags length pure diags length pure diags length pure diags length decomposition ideal likely thus seems almost obtain actual decomposition sema apparently decomposition provides additional pure diagram appear decomposition nevertheless still supports idea covering middle part decomposition using decompositions example previous example saw approximation gives longer chain degree sequences actual chain degree sequences via decomposition ideals consider ideal colon ideals mapping cone ideal requires two cancellations ignore first two degree sequences pure diags length pure diags length pure diags length pure diags length get following chain degree sequences order set approximate decomposition however know degree sequences decomposition must partial ordered chain ones violate partial order needed eliminated decomposition get first degree sequence ignore sequences get approximate decomposition decomposition degree sequence associated coming decomposition show decomposition similar situation example moreover lex ideal realize another different situation degree sequence shows chain degree sequences appear decompositions explanation extra degree sequence might possible example see last degree sequence coming next degree sequence assume degree sequence implies decompositions ideals simultaneous elimination entries positions betti diagram algorithm decomposition however possible otherwise would pure diagram length decomposition hence partial order must examples show decompositions may enough provide entire chain degree sequences decomposition therefore possible gaps redundant degree sequences approximation chain degree sequences view explanations cancellations mapping cone necessity order chain degree sequences able provide entire chain degree sequences decomposition problem true decomposition lex ideal described decompositions colon ideals precisely terms pure diagrams coefficients relation decompositions artinian ideal lex ideals pointed theorems furthermore examples observed section show know decompositions colon ideals almost entire chain degree sequences ideal may revealed words try formalize full chain degree sequences decomposition ideal using chains degree sequences colon ideals lex ideal studying observations indicate direction research decomposition ideals natural work aims describe coefficients lex ideal terms coefficients colon ideals larger lex ideal may arise point narrow attention degree sequences pure diagrams although results involve coefficients well foresight regarding relation coefficients decompositions based observations mentioned section acknowledgment author would like thank adviser uwe nagel proposing problem decompositions ideals contributions discussions course preparing manuscript would also like thank daniel erman introducing theory summer graduate workshop msri eferences christine berkesch daniel erman manoj kummini steven sam poset structures theory int math res imrn anna maria bigatti upper bounds betti numbers given hilbert function comm algebra mats boij jonas graded betti numbers modules multiplicity conjecture lond math soc mats boij jonas betti numbers graded modules multiplicity conjecture case algebra number theory david cook structure posets proc amer math david eisenbud schreyer betti numbers graded modules cohomology vector bundles amer math shalom eliahou michel kervaire minimal resolutions monomial ideals algebra gunnar theory introduction survey progress commutative algebra pages gruyter 
berlin chirstopher francisco jeffrey mermin jay scheweig veronese decompositions https courtney gibbons jack jeffries sarah mayes claudiu raicu branden stone bryan white decompositions betti diagrams complete intersections daniel grayson michael stillman software system research algebraic geometry http sema herzog betti numbers finite pure linear resolutions comm algebra heather hulett maximum betti numbers homogeneous ideals given hilbert function comm algebra uwe nagel stephen sturgeon combinatorial interpretations decompositions algebra keith pardue deformation classes graded modules maximal betti numbers illinois address gunturku epartment athematics niversity usa bor ichigan ast hurch treet
0
offline signature authenticity verification unambiguously connected skeleton segments nov jugurta luiz miranda canuto method offline signature verification presented paper based segmentation signature skeleton standard image skeletonization unambiguous sequences points unambiguously connected skeleton segments corresponding vectorial representations signature portions segments assumed fundamental carriers useful information authenticity verification compactly encoded sets scalars sampled coordinates length measure thus signature authenticity inferred euclidean distance based comparisons pairs compact representations average performance method evaluated experiments offline versions signatures database comparison purposes three approaches applied set signatures namely straightforward approach based dynamic time warping distances segments published method also based dtw average human performance equivalent experimental protocol results suggest human performance taken goal automatic verification discard signature shape details approach goal moreover best result close human performance obtained simplest strategy equal weights given segment shape length index signatures skeletonization mean opinion score mos ntroduction andwritten signature form personal identification widely accepted socially legally used centuries authenticate documents bank checks letters contracts many require proof authorship signing person may provide unique information regarding way converts gesture intentions spontaneous hand movement writing speed traversed path pen tilt pressure applied data articulated result static figure signed documents signature analysis divided two categories offline online offline mode either signatures available traditional wet ink method paper documents available scanned form optical devices scanners digital cameras cases available data corresponds static signature images type approach efficient verifying signatures due fact relevant dynamic information discarded online mode person uses digitizing device digitizing tablets touchsceen devices directly record signals hand movement provides much information static image digitizing device typically record several complementary signals path travelled pen tip well instantaneous speed applied pressure pen tilt approach one dominates research signature verification due worldwide spreading affordable acquisition devices however offline approach still attractive aspects instance even today many contracts credit card authorization performed traditional signatures paper indeed although online signature verification higher reliability many practical situations economical practical reasons wet ink signatures yet useful biometric signals even unlikely scenario complete substitution wet ink signatures electronically acquired ones least task signature verification ancient ink paper documents remain relevant topic due large amount old signed documents whose authenticity potentially waiting verified give fundamental definitions jargon assume signature verification process determines whether tested signature produced target individual least one genuine signature available chosen criteria tested signature similar genuine references similarity threshold labelled true genuine signature otherwise signature labelled false forgery moreover coetzer classify forgeries random forgery forger know author name neither original signature thus false signature completely random simple forgery forger knows author name access original signature skilled forgery forger access 
samples genuine signatures also knows name author also divided two classes amateur professional professional skilled forgery produced person professional expertise handwriting analysis able produce higher quality forgery amateur general offline signature verification process divided four steps acquisition preprocessing feature extraction comparison preprocessing step image quality improved pixels transformed reduce computational burden subsequent steps examples techniques applied step thinning color conversion noise reduction smoothing morphological operations resizing instance shah cropped images exclude redundant white regions feature extraction step works propose innovations according batista ideal feature extraction technique extracts minimal feature set maximizes interpersonal variability amongst signature samples various subjects whereas minimizes intrapersonal variability amongst samples belonging subject lee pan divide features three classes global features local features geometrical features typical features extracted offline signatures marginal projections shanker rajagopalan extracts vertical projection bitmaps corresponding signatures thus yielding profiles compared dynamic time warping dtw likewise coetzer pushes bit idea using many marginal projections signature different angles call discrete radon transform whose behaviour modelled hidden markov model nguyen also use similar projections indeed use two techniques global features extraction first derived total energy writer uses create signature whereas second technique employs information vertical horizontal projections signature focusing proportion distance key strokes image signature although marginal projections commonly used literature straightforward approaches feature extraction may also rely upon image skeletonization typically skeletonization used filter foreground pixels bitmaps also used map offline signatures sets points similar online representations appealing online verification techniques may deployed use dtw compare segments points different signatures indeed straightforward approach corresponds baseline method implemented paper explained section iii features available signature authenticity verification performed simulate actual verification academic works randomly select small number genuine signature samples user typically play role set enrolled signatures samples remaining dataset false genuine signatures randomly taken simulate verification attempts test samples compared enrolled samples decision made genuine signature rejected called false rejection error contrast forgery accepted called false acceptance error experimental protocol false acceptance rate far false rejection rate frr computation used work explained section furthermore paper propose method inspired online approaches compact codification segments skeletonized offline signatures explained section skeletonized segments basic aspect work detailed section moreover segmentation used work induces straightforward method similar classical online verification strategies dtw experiments take account biometrics comparing performances measures using different databases misleading consider important issue therefore allow direct comparison database experimental protocol compared methods use online signature database online sample converted offline bitmap representation explained section comparative results provided along experimental setup worth noting human baseline performance presented dataset spirit works finally section conclude discussing results 
usefulness mean opinion scores mos potential goal automatic verification performances nambiguously connected skeleton segments raw online signature signals frequently represented two vectors samples sequence regularly sampled horizontal positions xon another sequence corresponding vertical positions yon stands sample counter time compared offline representation signature verification signals xon yon significantly better although know velocity information may completely recovered offline representations address offline signature verification problem first recovering horizontal vertical signals may regarded xon yon done standard skeletonization described unlike true online representations skeletons signature sets unordered points instance figure offline signature skeletons regarded sets unordered pixels bitmap even though subsets pixels form segments clearly created unambiguous sequential hand gesture comparisons online signatures straightforward points xon yon ordered time analogously comparison two offline signatures may also done comparison sequences points xof yof representing black pixel coordinates skeletons however ambiguities concerning ordering points turns task combinatorial optimization problem whose computational cost may prohibitive significantly reduce cost methods proposed paper decompose offline signatures skeletons unambiguously connected skeleton segments ucss illustrated figure define ucss offline signature skeleton regarded connected graph vertices points skeleton edges bidirectional connection neighbouring points neighbourhood also consider degree vertex number neighbouring vertices connected therefore ucss sequences directly connected vertices found two vertices degrees greater internal segments vertex vertex degree greater extremities two vertices isolated lines fig inside circles one finds points skeleton delimits ucss full skeleton set extracted features ucss assume segment portion signature points unambiguously ordered apart single ambiguity overall direction pen movement one know end ucss movement pen begins thus signature sample signer represented set ucss sequence pairs coordinate points moreover take account single ambiguity overall direction pen movement ucss represented twice first sequence pairs given order reversed order iii baseline ethod two methods automatic offline signature verification proposed paper first method considered baseline straightforward application dynamic time warping compute distances ucss method standard dtw method itakura restrictions applied systematically compare every segment every segments given bag reverse segments extracted reference signatures consider instance test signature est est ucss bag segments ucss segments references merged single set ucss test signature compared ucss minimum distance taken words ucss test signature associated single ucss yields minimum dtw distance precisely est est min dtw est average distance sets est given est est est est dtw est stands dynamic time warping distance itakura restriction est reversed version depending one yields lowest distance moreover est test pointer signature test set moreover est length number points ucss tested signature work randomly take genuine signatures individual reference set denoted therefore union references namely resulting cardinality provide better score tested signatures also define partial bags segments segments reference signature excluded result able compute average distance reference signature corresponding remaining bag segments follows finally total distance 
tested signature est genuine set references summarised bag segments defined est plays role normalization score important drawback baseline method computation est dtw distances order obtain single tested signature roughly times average number segments ucss genuine signature consequence method high computational burden roposed method ucss subsampling significantly alleviate computational load baseline method encode ucss vector roughly represents shape position encode ucss plus scalar corresponding ucss length length given terms number points ucss illustrated figure ucss encoding strategy main aspect proposed method assume almost ucss short enough prevent strong warping therefore one may get rid high dtw computation cost replacing ucss subsampled points words assumed ucss comparisons dtw almost equivalent much faster euclidean distance computation corresponding vectors ucss length taken account plain dtw used indeed ucss regarded composition two sampled signals say lucss proposed coding scheme takes equally spaced subsamples thus yielding vector practical purposes assume dtw oversampled set points numerically rounded integer values iii also dilated segments approximately four pixels wide three steps enough convert entire online data offline signatures indeed resulting images used mean opinion score experiment detailed subsection experiments another sequence steps still taken convert offline signatures ones namely standard skeletonization method described applied dilated signature image resulting skeletons sets points signature sample centered origin whereas variance vertical horizontal directions scaled one fig proposed method illustration vectors lengths two segments segments segment ucss coded scalars last scalar represents segment length accumulated squared euclidean distance corresponding points two compared ucss highlight definition function dtw equation also defined minimum two distances reversed version considered obtain therefore comparison two signatures significantly simplified use following distance compared equation est min est set segments reference signatures analogously every subset replaced final given signature computed equation xperimental esults work use online signatures database error rates online verification task abundantly found literature database subcorpus mcyt database acquired different writers writer provided genuine signatures whereas different volunteers provided skilled forgeries per signature signatures acquired wacom intuos usb tablet constant sampling rate obtain offline signatures need experiments first convert mcyt online sample image horizontal xon vertical yon pen tip positions time considered follows points online signature interpolated using splines allow oversampling otherwise sparse representation due relatively low sampling rate samples per second fig online signature oversampled online signature dilated image approximately four pixels wide lines skeleton signature resulting versions signatures represented set coordinate pairs ucss extracted noteworthy order pairs presented longer stands discrete time counter true online representation represents mere skeleton point counter whose correspondence time ordering unknown experiments simulate actual biometric system randomly choose genuine signatures database form enrolment card afterwards signatures user randomly sampled compared enrolment card given decision threshold proportion true signatures whose costs threshold thus wrongly rejected genuine estimated frr whereas proportion false signatures whose 
distances threshold estimated far also compared computational burden two proposed methods process compare one enrolment card genuine samples per signature compute scores remaining signature samples per reference processing time baseline method thousands times greater proposed method ucss subsampling surprisingly lighter method yielded significantly better performance presented figure interesting methods two ucss associated according either equation equation instead comparing shapes one may compare lengths thus yielding new score corresponding absolute difference associated lengths regarded third method referred length based one figure table table esults method independent runs ach run corresponds random partition reference test signatures genuine ones method method based shape based proposed mean scores iii eer std dev mean opinion score extraction protocol fig roc curves tested methods set experiments use randomly chosen reference signatures simulate enrolment pool remaining genuine signatures along false ones test simulated system black dot also indicates mos performance explained subsection subjective decision threshold handled yield roc provide wider comparison scenario proposed methods based ucss also included comparison experiments method published extracts projections bitmaps corresponding signatures compare modified dtw called stability measures included improve performances reproduced best implementation method explained paper applied offline signatures used test methods experimental protocol highlight although method implemented revised strictly follow instructions apply database signatures illustrated figure moreover instead reference genuine signatures use ones assure protocol compared methods second round experiments repeated times method following protocol genuine signatures randomly chosen references enrolment whereas set remaining genuine signatures plus randomly chosen false ones used test simulated system independent trial adjusted threshold decision obtain operational point far equals frr equal error rate eer table presents average results per method terms eer standard deviation independent trials sets experiments scores methods iii also fused simple arithmetic mean yielding improved performance shown figure table quantify human performance task also prepared cards one corresponding genuine source signatures genuine signer mcyt database cards contain five genuine signatures left side ten signatures randomly chosen right side signatures right side genuines example cards seen figure cards presented different students lecturers university willing volunteers volunteers carefully instructed study genuine signatures presented left part page label signatures right part writing boxes next signature true genuine false priori provided highlight volunteers know proportions true false signatures card fig reference panel five randomly chosen genuine signatures panel five randomly chosen forgeries five randomly chosen genuine signatures total cards filled volunteers comparing provided labels true hidden labels estimated mean opinion score mos obtained consolidated false rejection rate false acceptance rate dot figure allows visual comparison rates far frr mos decision threshold known handled roc curves automatic methods range possible decision thresholds onclusion brief work inspired evident superiority online signature verification methods compared offline ones therefore methods proposed based skeletonization possibly straightforward method obtain signature representations images 
baseline method unambiguously connected sequences points skeletons conception unambiguously connected skeleton segment ucss whose formal definition given section plays pivotal role assuming ucss shapes position relevant information biometric verification expect systematic ucss dtw would yield best performance high computational cost however noticed instead alternative method initially proposed alleviate high computational burden baseline method considering points per ucss yielded significantly better performance compared baseline method moreover even simple byproduct method based comparison ucss lengths performed better baseline method indeed baseline method outperformed method uses improved dtw relies upon marginal projections signatures instead segments like ucss proposed work results conjecture average ucss good segmentation option ucss shape details relevant information carriers biometric verification purpose indeed given comparative performances even conclude ucss shape details disturbing noises biometric verification task point interesting matter future works regarding method based ucss length noteworthy use either dtw euclidean distance match ucss necessary step words behind apparent simplicity method one aware matching ucss simple step also rises interesting questions instance superiority joint approach ucss shape length combined may connection lost signal pen tip velocity turn main signal biometric verification indeed ucss shape straightness length expected somehow dependent pen tip velocity either power law isochrony dependency also attractive subject works letter conjecture fusion length shape based scores somehow related inferred velocity signal given signature image may explain relatively good performance clearly even best performance presented far performances typically found literature online verification database original online form contrast work assume offline verification task best possible performances far crowd willing attentive humans therefore choose use rate basis comparison indicates best performances obtained indeed close crowd humans nonetheless experiments using actual offline signature databases competing methods intended done sequel work acknowledgment work supported grant conselho nacional desenvolvimento cnpq eferences abdullah omar offline signature verification system pattern analysis intelligent robotics icpair international conference vol ieee batista rivard sabourin granger maupin state art signature verification pattern recognition technologies applications recent advances canuto dorizzi matos infinite clipping handwritten signatures pattern recogn vol coetzer herbst preez offline signature verification using discrete radon transform hidden markov model eurasip journal advances signal processing vol coetzer herbst preez signature verification comparison human machine performance tenth international workshop frontiers handwriting recognition suvisoft ferrer vargas morales robustness offline signature verification based gray level features ieee transactions information forensics security vol gonzalez woods image processing digital image processing vol hafemann sabourin oliveira offline handwritten signature review arxiv preprint itakura minimum prediction residual principle applied speech recognition ieee transactions acoustics speech signal processing vol lee pan offline tracing representation signatures systems man cybernetics ieee transactions vol morocho morales fierrez tolosana signature recognition establishing human baseline performance via 
crowdsourcing international conference biometrics forensics iwbf ieee nel preez herbst estimating pen trajectories static scripts using hidden markov models document analysis recognition proceedings eighth international conference ieee nguyen blumenstein leedham global features signature verification problem international conference document analysis recognition ieee simon gonzalez faundezzanuy espinosa satue hernaez igarza vivaracho mcyt baseline corpus bimodal biometric database iee proceedingsvision image signal processing vol plamondon srihari online handwriting recognition comprehensive survey pattern analysis machine intelligence ieee transactions vol qiao nishiara yasuhara framework toward restoration writing order handwriting image ieee transactions pattern analysis machine intelligence vol shah khan subhan fayaz shah offline signature verification technique using pixels intensity levels international journal signal processing image processing pattern recognition vol shanker rajagopalan signature verification using dtw pattern recognition letters vol viviani flash power law isochrony converging approaches movement journal experimental psychology human perception performance vol
1
journal latex class files vol august difficulty adjustable scalable constrained test problem toolkit sep zhun fan senior member ieee wenji xinye cai hui caimin wei qingfu zhang fellow ieee kalyanmoy deb fellow ieee erik goodman evolutionary algorithms moeas achieved great progress recent decades designed solve unconstrained optimization problems fact many problems usually contain number constraints promote research constrained optimization first propose three primary types difficulty reflect challenges optimization problems characterize constraint functions cmops including convergencehardness develop general toolkit construct difficulty adjustable scalable constrained optimization problems cmops three types parameterized constraint functions according proposed three primary types difficulty fact combination three primary constraint functions different parameters lead construct large variety cmops whose difficulty uniquely defined triplet parameter specifying level primary difficulty type respectively furthermore number objectives toolkit able scale two based toolkit suggest nine difficulty adjustable scalable cmops named evaluate proposed test problems two popular cmoeas adopted test performances different difficulty triplets experiment results demonstrate none solve problems efficiently stimulate develop new constrained moeas solve suggested index problems optimization test problems controlled difficulties ntroduction ractical optimization problems usually involve simultaneous optimization multiple conflicting objectives many constraints without loss generality constrained optimization problems cmops defined follows minimize subject mdimensional objective vector defines inequality constraints defines equality constraints greater three usually call constrained optimization problem cmaop solution said feasible meets time two feasible solutions solution said dominate least one denoted feasible solution feasible solution dominating said feasible solution set feasible solutions called pareto set mapping objective space results set objective vectors denoted pareto front cmops one objective need optimized simultaneously subject constraints generally speaking cmops much difficult solve unconstrained counterparts unconstrained optimization problems mops constrained evolutionary algorithms cmoeas particularly designed solve cmops capability balancing search feasible infeasible regions search space fact two basic issues need considered carefully designing cmoea one balance feasible solutions infeasible solutions balance convergence diversity cmoea address former issue constraint handling mechanisms need carefully designed researchers existing constraint handling methods broadly classified five different types including feasibility maintenance use penalty functions separation constraint violation objective values constraint handling hybrid methods feasibility maintenance methods usually adopt special encoding decoding techniques guarantee newly generated solution feasible penalty method one popular approaches overall constraints violation added objective predefined penalty factor indicates preference constraints objectives penalty method includes static penalties dynamic penalties death penalty functions penalty functions adaptive penalty functions penalty functions etc methods using separation constraint violation objective values constraint functions objective functions treated separately variants type include stochastic ranking constraint dominance principle cdp methods constraint handling method 
constraint functions transformed one extra objective function representative methods type include infeasibility driven evolutionary algorithm idea journal latex class files vol august comoga cai wang method etc hybrid methods constraint handling usually adopt several methods representative methods include adaptive model atm ensemble constraint handling methods echm address second issue selection methods need designed balance performance convergence diversity moeas present moeas generally classified three categories based selection strategies indicator based methods ibea hype group based methods set first level solutions selected improve performance convergence crowding distance adopted maintain performance diversity methods performance convergence maintained minimizing aggregation functions performance diversity obtained setting weight vectors uniformly indicator based methods hype performance convergence diversity achieved using hypervolume metric cmop includes objectives constraints number features already identified define difficulty objectives include geometry linear convex concave degenerate disconnected mixed search space biased unbiased unimodal objectives dimensionality variable space objective space first one geometry geometry mop linear convex concave degenerate disconnected mixed representative mops reflecting type difficulty include zdt dtlz second one biased unbiased search space representative mops category include third one modality objectives objectives mop either objectives multimodal multiple local optimal solutions increase likelihood algorithm trapped local optima high dimensionality variable space objective space also critical features define difficulty objectives high dimensionality variable space dtlz wfg high dimensionality objective space hand constraint functions general greatly increase difficulty solving cmops however far know several test suites ctp designed cmops ctp test problems capability adjusting difficulty constraint functions offer two types difficulties difficulty near pareto front difficulty entire search space test problem gives difficulty near constraint functions make search region close pareto front infeasible test problems provide optimizer difficulty entire search space test problems also commonly used benchmarks provide two types difficulties pfs part unconstrained pfs rest test problems difficulties near pfs many constrained pareto optimal points lie boundaries constraints even though cdp offer abovementioned advantages limitations number decision variables constraint functions extended difficulty level type adjustable constraint functions low ratios feasible regions entire search space suggested number objectives scalable used test problems include bnh tnk srn osy problems scalable number objectives difficult identify types difficulties paper propose general framework construct difficulty adjustable objective scalable cmops overcome limitations existing cmops cmops constructed toolkit classified three major types diversityhard cmops cmop type problem presents difficulty cmoeas find feasible solutions search space cmops usually small portions feasible regions entire search space addition cmops mainly suggest difficulty cmoeas approach pfs efficiently setting many obstacles pfs cmops mainly provide difficulty cmoeas distribute solutions along complete pfs work three types difficulty embedded cmops proper construction constraint functions summary contribution paper follows paper defines three primary types difficulty constraints cmops designing 
new constraint handling mechanisms cmoea one investigate nature constraints cmop cmoea aiming address including types levels difficulties embedded constraints therefore proper definition types difficulty constraints cmops necessary desirable paper also defines level difficulty regarding type difficulty constraints constructed cmops adjusted users difficulty level uniquely defined triplet parameter specifying level primary difficulty type respectively combination three primary constraint types different difficulty triplets lead construction large variety constraints cmops based proposed three primary types difficulty constraints nine difficulty adjustable cmops named constructed journal latex class files vol august remainder paper organized follows section discusses effects constraints pfs section iii introduces types levels difficulties provided constraints cmops section explains proposed toolkit construction methods generating constraints cmops different types levels difficulty section realizes scalability number objectives cmops using proposed toolkit section generates set difficulty adjustable cmops using proposed toolkit section vii performance two cmoeas different difficulty levels compared experimental studies section viii concludes paper ffects constraints constraints define infeasible regions search space leading different types levels difficulty resulting cmops major effects constraints pfs cmops include following infeasible regions make original unconstrained partially feasible divided two situations first situation constrained problem consists part unconstrained set solutions boundaries constraints illustrated fig second situation constrained problem part unconstrained illustrated fig infeasible regions block way towards illustrated fig complete original covered infeasible regions becomes feasible every constrained pareto optimal point lies constraint boundaries illustrated fig constraints may reduce dimensionality one example illustrated fig general although problem constraints make constrained particular case fig iii ifficulty types levels cmop three primary difficulty types identified including feasibilityhardness difficulty level primary difficulty type defined parameter ranging three difficulty levels corresponding three primary difficulty types respectively form triplet depicts nature difficulty cmop difficulty generally pfs cmops many discrete segments parts difficult achieved parts imposing large infeasible regions near result achieving complete difficult cmops difficulty cmops ratios feasible regions search space usually low difficult generate feasible solution cmoea feasibilityhard cmops often initial stage cmoea solutions population infeasible difficulty cmops hinder convergence cmoeas towards pfs usually cmoeas encounter difficulty approach pfs infeasible regions block way cmoeas converging pfs words generational distance metric indicates performance convergence difficult minimized evolutionary process difficulty level primary difficulty type difficulty level primary difficulty type defined parameter parameterized constraint function corresponding primary difficulty type parameter normalized three parameters corresponding difficulty level three primary difficulty types respectively form triplet exactly defines nature difficulty cmop constructed three parameterized constraint functions element triplet take value either simple combination three primary difficulty types give rise seven basic different difficulty types analogous simple combination three primary colors 
gives rise seven basic colors allow three parameters take value literally get countless difficulty nature analogous countless colors color space difficulty nature precisely depicted triplet construction toolkit know constructing cmop composed constructing two major parts objective functions constraint functions suggested general framework constructing objective functions stated follows two function called shape function called nonnegative distance function objective function sum shape function nonnegative distance function adopt method work terms constructing constraint functions three different types constraint functions suggested paper corresponding proposed three primary types difficulty cmops specifically constraint functions provide difficulty typeii constraint functions introduce difficulty feasibilityhardness constraint functions generate difficulty detailed definition journal latex class files vol august infeasible area without constraints constraints infeasible area constraints without constraints infeasible area constraints without constraints feasbile area constraints without constraints infeasbile area constraints without constraints fig illustration effects constraints pfs infeasible regions makes original unconstrained partially feasible many constrained pareto optimal solutions lie constraint boundaries infeasible regions makes original unconstrained partially feasible constrained part unconstrained infeasible regions blocks way converging constrained unconstrained complete original feasible every constrained pareto optimal solution lies constraint boundaries constraints reduce dimensionality optimization problem transformed constrained single optimization problem table basic difficulty types cmop basic difficulty types comment distributing feasible solutions complete difficult obtaining feasible solution difficult approaching pareto optimal solution difficult obtaining feasible solution complete difficult approaching pareto optimal solution complete difficult obtaining feasible solution approaching pareto optimal solution difficult obtaining pareto optimal solution complete difficult constraint functions fig illustration three primary difficulty types combination resulting seven basic difficulty types shown table using analogy three primary colors combination towards seven basic colors fig illustration combining three parameterized constraint functions using triplet composing three parameters three primary constraint functions correspond three primary difficulty types respectively three types constraint functions given detail follows constraint functions defined limit boundary specifically type constraint functions divides cmop number disconnected segments generating difficulty diversityhardness use parameter represent level difficulty means constraint functions impose effects cmop means constraint functions provide maximum effects example cmop suggested follows minimize minimize sin subject sin example set parameter indicating level difficulty set number disconnected segments controlled moreover value controls width segment width segments reaches maximum increases width segments decreases difficulty level increases parameter difficulty level result set shown fig shown observed width segments reduced keeps increasing width segments shrinks zero provides maximum level difficulty cmop cmop constraint functions also shown fig difficult level journal latex class files vol august seen constraint functions applied cmops means cmop scalability number objectives constructed using 
type constraints constraint functions constraint functions set limit reachable boundary distance function thereby control ratio feasible regions result typeii constraint functions generate difficulty feasibilityhardness use parameter represent level difficulty ranges means constraints weakest means constraint functions strongest example cmop constraint functions defined follows minimize minimize sin subject equals exp distance constrained unconstrained controlled example ratio feasible regions controlled feasible area reaches maximum shown fig feasible area decreased shown fig feasible area objective space small problem shown fig constraints also applied cmops three objectives shown fig constraint functions constraint functions limit reachable boundary objectives result infeasible regions act like blocking hindrance searching populations cmoeas approach result constraint functions generate difficulty use parameter represent level difficulty ranges means constraints weakest means constraints strongest difficulty level increases increases example cmop constraint functions defined follows min min sin cos sin sin cos level difficulty parameter defined shown fig infeasible regions increased shown fig infeasible regions become bigger shown fig constraints also applied cmops three objectives shown fig constraint functions expressed matrix form defined follows translation vector transformational matrix control degree rotation stretching vector according constraint functions expressed follows sin worthwhile point using approach extend number objectives three even though sophisticated visualization approach needed show resulting cmops objective space summarize three types constraint functions discussed correspond three primary difficulty types cmops respectively particular constraint function corresponds corresponds corresponds convergencehardness level primary difficulty type decided parameter work three parameters defined triplet specifies difficulty level particular difficulty type noteworthy point approach constructing toolkit cmops also scaled generate cmops three objective functions scalability number objectives discussed detail section calability number objectives recently optimization attracts lot research interests makes feature scalability number objectives cmops desirable general framework construct cmops scalability number objectives given borrow idea wfg toolkit construct objectives scaled number objectives specifically number objectives controlled parameter three different types constraint functions proposed section combined together scalable objectives construct difficulty adjustable scalable cmops specifically first constraint functions defined limit reachable boundary decision variable shape functions ability control difficulty level journal latex class files vol august without constraint feasible area constraint without constraint feasible area constraint feasible area constraint without constraint feasible area constraint fig illustrations influence constraint functions parameter difficulty level increases width segments decreases difficulty level cmop increases cmop constraint disconnect usually many discrete segments obtaining complete difficult thus cmop constraints cmop cmop cmop cmop without constraint constraint feasible area without constraint constraint feasible area without constraint constraint feasible area without constraint constraint feasible area default value set fig illustrations influence constraint functions parameter difficulty degree exp ratio feasible regions 
controlled parameter increases portion feasible regions decreases difficulty level feasibility increases constraint applied optimization problems constraint infeasible area constraint infeasible area constraint infeasible area constraint infeasible area fig illustrations influence constraint functions infeasible regions block way converging gray parts figure infeasible regions parameter adopted represent level difficulty ranges means constraints weakest means constraints strongest increases difficulty level cmop increases constraint also applied optimization problems constraint functions belong limit reachable boundary distance functions ability control difficulty level last constraint functions set directly objective belong generate number infeasible regions hinder working population cmoea approaching difficulty level generated constraint functions controlled rest parameters illustrated follows three parameters used control number type constraint functions respectively total number constraint functions controlled decides dimensions decision variables decides number disconnected segments indicates distance constrained unconstrained difficulty level controlled difficulty triplet component ranging parameter difficult triplet increases difficulty level increases worth noting number objectives dascmops easily scaled tuning parameter difficulty level also easily adjusted journal latex class files vol august assigning difficulty triplet three parameters ranging min min min sin odd cos even exp set difficulty adjustable scalable cmop section example set nine difficulty adjustable scalable cmops suggested proposed toolkit mentioned section constructing cmop composes constructing objective functions constraint functions according suggest nine functions including convex concave discrete shapes construct cmops set difficulty adjustable constraint functions generated nine difficulty adjustable scalable cmops named generated combining suggested objective functions generated constraint functions detailed definitions shown table table constraint functions also constraint functions difference different distance functions two objectives number objectives able scale two example three objectives constraint functions worth noting value difficulty triplet elements set users want difficulty levels need adjust parameters triplet elements values generate new set test instances vii experimental study experimental settings test performance cmoeas two commonly used cmoeas tested sixteen different difficulty triplets experiment descripted section three parameters defined triplet specifies difficulty level particular difficulty type specifically represents difficulty level denotes difficulty level feasibilityhardness indicates difficulty level convergencehardness difficulty triplets listed table iii detailed parameters algorithms summarized follows setting reproduction operators mutation probability number decision variables polynomial mutation operator distribution index set simulated binary crossover sbx operator distribution index set rate crossover population size number runs stopping condition algorithm runs times independently test problem sixteen different difficulty triplets maximum function evaluations neighborhood size probability use select neighborhood maximal number solutions replaced child performance metric measure performance different difficulty triplets inverted generation distance igd adopted detailed definition igd given follows inverted generational distance igd igd metric simultaneously reflects 
The IGD metric simultaneously reflects the performance of both convergence and diversity. It is defined as follows:

    IGD(P*, A) = ( Σ_{v ∈ P*} min_{a ∈ A} d(v, a) ) / |P*|,

where P* is a set of ideal (reference) points on the PF, A is the approximate set achieved by an algorithm, and d denotes the Euclidean distance in the objective space, whose dimension is the number of objectives. It is worth noting that a smaller value of IGD represents better performance with regard to both diversity and convergence.

C. Performance comparisons. Tables IV-VI present the statistical results of the IGD values of the two tested algorithms. We observe that on difficulty triplets dominated by diversity-hardness, one of the algorithms is significantly better, which indicates that it is more suitable for solving such instances; on difficulty triplets combining feasibility- and diversity-hardness, the other algorithm is significantly better, indicating that it is more suitable for those instances. The complete win/loss pattern for every instance and triplet is reported in the tables.

Table II: The DAS-CMOP test suite. For each problem, the objective functions (sums of sin and cos terms over the odd- and even-indexed decision variables, with convex, concave and discrete PF shapes) and the constraint functions (sin/cos forms and exponential terms) of DAS-CMOP1-9 are listed.

Table III: The sixteen difficulty triplets used in the experiments.

The final populations with the best IGD values among the independent runs using the different difficulty triplets are plotted in the figures. We observe that each type of constraint function indeed generates the corresponding difficulties: with increasing elements of the difficulty triplet, the problems become more difficult to solve. For example, on triplets emphasizing diversity-hardness, the algorithms achieve only parts of the PFs, and with an increasing first element of the triplet it becomes more difficult to find the whole PFs; on triplets emphasizing feasibility- or convergence-hardness, larger difficulty levels likewise degrade both algorithms, with one of them performing comparatively better. We also observe that on the triplet in which the constraints vanish, one algorithm is significantly better, in other words it works better when the problems carry no constraints; on the remaining triplets the two algorithms are significantly different on some instances and statistically indistinguishable on others.

D. Performance comparisons on the remaining instances. The statistical results of the IGD values for the remaining DAS-CMOPs are presented in the corresponding tables and show the same qualitative picture: without constraints one algorithm is significantly better; on triplets emphasizing a single difficulty type each algorithm dominates a different subset of triplets; and on triplets with simultaneous difficulty types the comparison again favors one of the two on most instances.

E. Analysis of the experimental results. From the performance comparisons on the nine test instances, it is clear that each type of constraint function generates the corresponding difficulties, and that when increasing the elements of the difficulty triplet the problems become more difficult to solve. Furthermore, it can be concluded that each of the two algorithms performs better on a different portion of the difficulty space of the DAS-CMOPs.
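For reference, the IGD metric used throughout these comparisons is straightforward to compute. The following is a minimal sketch (ours, not from the paper), assuming NumPy and that both sets are given as arrays of objective vectors:

```python
import numpy as np

def igd(ideal, approx):
    """Inverted generational distance: the mean, over the ideal (reference)
    set P*, of the Euclidean distance to the nearest point of the approximate
    set A. Smaller is better: IGD is small only when A is both close to the
    PF (convergence) and spread along it (diversity)."""
    ideal = np.asarray(ideal, dtype=float)    # shape (|P*|, m objectives)
    approx = np.asarray(approx, dtype=float)  # shape (|A|, m)
    dists = np.linalg.norm(ideal[:, None, :] - approx[None, :, :], axis=2)
    return dists.min(axis=1).mean()
```

Averaging IGD over the independent runs of an algorithm on one difficulty triplet yields exactly the entries summarized in the tables.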
In the case of simultaneous difficulty types, one of the two performs better on most test instances.

VIII. CONCLUSION

In this work, we proposed a construction toolkit to build difficulty-adjustable and scalable CMOPs. The method used to design the construction toolkit is based on three primary types of constraint functions, which we identified as corresponding to three primary difficulty types. The method is also scalable in the number of objectives and constraints and can be conveniently extended. As an example, a set of nine DAS-CMOPs was generated using the construction toolkit. To verify the effectiveness of the suggested test instances, comprehensive experiments were conducted to test the performance of two popular CMOEAs under different difficulty triplets. By analyzing the performance of the two tested algorithms, we found that the three primary types of difficulties indeed exist in the corresponding test problems, and that the tested algorithms showed different behaviors in reaching the PFs. This observation demonstrates that the proposed method of constructing CMOPs is efficient and effective and can help to evaluate the performance of the tested algorithms.

ACKNOWLEDGMENT

This research work was supported by the Guangdong Key Laboratory of Digital Signal and Image Processing, the National Natural Science Foundation of China, the Jiangsu Natural Science Foundation, and the Science and Technology Planning Project of Guangdong Province, China.
Tables IV-VI: Mean and standard deviation of the IGD values obtained by the two tested algorithms on the DAS-CMOP instances; a Wilcoxon rank-sum test at a fixed significance level is performed, where the symbols denote performance significantly worse and significantly better, respectively. Each row lists a test instance and a difficulty triplet.

Figs. 4-7: Final populations with the best IGD values of the independent runs using different difficulty triplets; the subplots show the best populations achieved under the corresponding difficulty triplets.
9
Sharpening Jensen's Inequality

J. G. Liao and Arthur Berg
Division of Biostatistics and Bioinformatics, Penn State University College of Medicine

October

Abstract. This paper proposes a new sharpened version of Jensen's inequality. The proposed new bound is simple and insightful, is broadly applicable by imposing minimum assumptions, and provides fairly accurate results in spite of its simple form. Applications to the moment generating function, power mean inequalities, and Rao-Blackwell estimation are presented. This presentation can be incorporated into any statistics course.

Keywords: Jensen gap; power mean inequality; Rao-Blackwell estimator; Taylor series

1 Introduction

Jensen's inequality is a fundamental inequality in mathematics that underlies many important statistical proofs and concepts. Standard applications include the derivation of the arithmetic-geometric mean inequality, the nonnegativity of the Kullback-Leibler divergence, and the convergence property of the EM algorithm (Dempster et al.). Jensen's inequality is covered in major statistical textbooks, such as Casella and Berger and Wasserman, as a basic mathematical tool for statistics. Let X be a random variable with finite expectation μ = E[X], and let φ be a convex function. Jensen's inequality (Jensen) establishes that

    E[φ(X)] − φ(μ) ≥ 0.

The inequality, however, is not sharp unless Var(X) = 0 or φ is linear; there is therefore substantial room for advancement. This paper proposes a new, sharper bound on the Jensen gap E[φ(X)] − φ(μ). Improvements of Jensen's inequality have been developed recently; see, for example, Walker, Abramovich and Persson, and Horvath et al., and the references cited therein. The proposed bound, however, has the following advantages. First, it is simple, easy to use, and insightful in form, being expressed in terms of the second derivative of φ and Var(X), and at the same time it gives fairly accurate results in the several examples considered; many previously published improvements have a much more complicated form, are much more involved to use, and can even be difficult to compute, as discussed in Walker. Second, our method requires only that φ be twice differentiable and is therefore broadly applicable; in contrast, other methods require that φ admit a power series representation with positive coefficients (Abramovich and Persson; Dragomir; Walker) or impose further conditions (Abramovich and Persson). Third, it provides a lower bound and an upper bound in a single formula. The material in this paper can be incorporated into classroom teaching: at a slightly increased technical level and lecture time, one is able to present a much sharper version of Jensen's inequality, which significantly enhances students' understanding of the underlying concepts.

2 Main result

Theorem 1. Let X be a random variable with mean μ = E[X], and let φ be a twice differentiable function. Define the function

    h(x) = (φ(x) − φ(μ)) / (x − μ)² − φ′(μ) / (x − μ).

Then

    inf_x h(x) · Var(X) ≤ E[φ(X)] − φ(μ) ≤ sup_x h(x) · Var(X).

Proof. Let F be the cumulative distribution function of X. Applying Taylor's theorem to φ around μ with a form of the remainder gives φ(x) = φ(μ) + φ′(μ)(x − μ) + R(x); explicitly solving R(x) = h(x)(x − μ)² gives the h defined above. Therefore

    E[φ(X)] − φ(μ) = ∫ [φ(x) − φ(μ) − φ′(μ)(x − μ)] dF(x) = ∫ h(x)(x − μ)² dF(x),

and the result follows from inf h · ∫ (x − μ)² dF ≤ ∫ h(x)(x − μ)² dF ≤ sup h · ∫ (x − μ)² dF.

Theorem 1 also holds when inf_x h(x) is replaced by the infimum over the support of X, and sup_x h(x) by the corresponding supremum, which can only tighten the bounds, since the infimum over a larger set is smaller and the supremum larger. Less tight bounds implied by an economics working paper (Becker) have the general form c · Var(X) where c depends on φ; bounds of similar forms are presented in Abramovich and Persson, Dragomir, and Walker, but Theorem 1 is much simpler and applies to a wider class of functions. The inequality implies Jensen's inequality. Note also that Jensen's inequality is sharp only when φ is linear, whereas the inequality of Theorem 1 is sharp whenever φ is a quadratic function.

3 A sample version, and when the bounds are easy to evaluate

The distribution of X may be unknown even though a random sample from the underlying distribution is available. A version of Theorem 1 suitable for this situation is given in the following corollary.

Corollary 1. Let x₁, …, xₙ be data points with mean x̄ and variance s² = n⁻¹ Σ (xᵢ − x̄)². Then

    inf h(x) · s² ≤ n⁻¹ Σ φ(xᵢ) − φ(x̄) ≤ sup h(x) · s²,

where the infimum and supremum are taken over [min xᵢ, max xᵢ].

Proof. Consider a discrete random variable X with probability distribution P(X = xᵢ) = 1/n. Then E[X] = x̄ and Var(X) = s², and the corollary follows from an application of Theorem 1.

Lemma 1. h is monotonically increasing if φ′ is convex and monotonically decreasing if φ′ is concave.

Proof. We prove the case of convex φ′; the analogous result for concave φ′ follows similarly. Note that it suffices to compare difference quotients of φ′ on either side of μ; without loss of generality, assuming convexity of φ′ gives the required inequality between these quotients, and therefore the result follows. The proof of this lemma borrows ideas from Bennish.

Lemma 1 makes Theorem 1 easy to use. For X supported on an interval (a, b), the following results hold: if φ′ is convex, then inf h = lim_{x→a} h(x) and sup h = lim_{x→b} h(x); if φ′ is concave, then inf h = lim_{x→b} h(x) and sup h = lim_{x→a} h(x). Note that these limits can be either finite or infinite. Examples of functions for which φ′ is convex include eˣ; examples of functions for which φ′ is concave include log x.
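Both the theorem and its sample version are easy to check numerically. The following sketch (ours; the function names are hypothetical, and it assumes only NumPy) evaluates h on a grid over the data range and applies Corollary 1, here to the arithmetic-geometric mean comparison with φ(x) = log x developed in the examples below:

```python
import numpy as np

def jensen_gap_bounds(phi, dphi, x, grid):
    """Empirical sharpened Jensen bounds (Theorem 1 / Corollary 1):
    evaluate h(t) = (phi(t) - phi(mu))/(t - mu)^2 - dphi(mu)/(t - mu)
    on a grid covering the data range, then scale its min/max by the
    variance. Returns (lower, upper) bounds on mean(phi(x)) - phi(mean(x))."""
    x = np.asarray(x, dtype=float)
    mu, var = x.mean(), x.var()
    t = grid[np.abs(grid - mu) > 1e-9]   # skip the removable singularity at mu
    h = (phi(t) - phi(mu)) / (t - mu) ** 2 - dphi(mu) / (t - mu)
    return h.min() * var, h.max() * var

# AM-GM check with phi = log: bounds on log(GM) - log(AM), per Corollary 1.
x = np.random.default_rng(0).uniform(1.0, 2.0, 1000)
grid = np.linspace(x.min(), x.max(), 10001)
lo, hi = jensen_gap_bounds(np.log, lambda t: 1.0 / t, x, grid)
am, gm = x.mean(), np.exp(np.log(x).mean())
assert am * np.exp(lo) <= gm <= am * np.exp(hi)
```

Because φ′(x) = 1/x is convex here, Lemma 1 guarantees that the extremes of h sit at the endpoints of the data range, so the grid search is exact up to floating-point error.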
4 Examples

Example 1 (moment generating function). For a random variable X with mean μ and finite variance, we bound the moment generating function E[e^{tX}]. Applying Theorem 1 with φ(x) = e^{tx} gives

    inf h(x) · Var(X) ≤ E[e^{tX}] − e^{tμ} ≤ sup h(x) · Var(X),

where, by Lemma 1, the infimum and supremum are the limits of h at the two ends of the support. Theorem 1 thus provides an improvement over Jensen's inequality E[e^{tX}] ≥ e^{tμ}; when X has a finite domain, a significant improvement of the lower bound is possible because inf h > 0, and similar results hold for negative t. To compare with Walker, consider an exponential random variable X with mean 1, for which E[e^{tX}] = 1/(1 − t) for t < 1, so the actual Jensen gap is 1/(1 − t) − e^t. Since Var(X) = 1, the limit of h at the left end of the support gives a simple, if less sharp, lower bound via inf h; utilizing much more elaborate approximations and numerical optimizations, Walker obtained a more accurate lower bound for this example.

Example 2 (arithmetic and geometric means). Let X be a positive random variable on an interval (a, b) with mean μ. Note that −log x is convex and its derivative is concave; applying Theorem 1 and Lemma 1 to φ(x) = log x leads to

    lim_{x→a} h(x) · Var(X) ≤ E[log X] − log μ ≤ lim_{x→b} h(x) · Var(X).

Now consider a sample of positive data points x₁, …, xₙ, and let x̄ denote the arithmetic mean and GM the geometric mean. Applying Corollary 1 gives

    x̄ · exp( h(min xᵢ) · s² ) ≤ GM ≤ x̄ · exp( h(max xᵢ) · s² ),

with h as defined in Theorem 1. To give numerical results, we generated random numbers from a uniform distribution; with the resulting arithmetic and geometric means, the inequality becomes a fairly tight pair of bounds, whereas replacing the restricted infimum and supremum by their global counterparts leads to a less accurate lower bound and upper bound.

Example 3 (power mean). Let X be a positive random variable on a positive interval (a, b) with mean μ, and for a real number p define the power mean M_p = (E[X^p])^{1/p}. Jensen's inequality establishes that M_p is an increasing function of p. We can give a sharper inequality by applying Theorem 1 to a suitable power function of X: writing one power mean in terms of another, Theorem 1 leads to inf h · Var ≤ E[φ(X)] − φ(μ) ≤ sup h · Var, and applying Lemma 1, noting that φ′ is convex or concave according to the exponent involved, as noted in Section 3, turns the infimum and supremum into limits of h at the two ends of the interval. For the sequence of data generated in Example 2, applying Corollary 1 in this way gives an upper bound that is much smaller than the bound implied by Jensen's inequality alone, while replacing the restricted infimum by the global one again leads to a less accurate lower bound. A recent article published in The American Statistician (de Carvalho) revisited Kolmogorov's formulation of the generalized mean, f⁻¹(E[f(X)]) for a continuous monotone function f with inverse f⁻¹; the geometric mean corresponds to f(x) = log x and the power mean to f(x) = x^p. We can also apply Theorem 1 to bound this general f-mean.

Example 4 (Rao-Blackwell estimator). The Rao-Blackwell theorem (see, e.g., Casella and Berger; Wasserman) is a basic result in statistical estimation. Let θ̂ be an estimator, L a loss function that is convex in the estimator, and T a sufficient statistic; the Rao-Blackwell estimator θ* = E[θ̂ | T] satisfies the following inequality between risk functions:

    E[L(θ, θ*)] ≤ E[L(θ, θ̂)].

We can improve this inequality by applying Theorem 1 with respect to the conditional distribution of θ̂ given T:

    inf h · Var(θ̂ | T) ≤ E[L(θ, θ̂) | T] − L(θ, θ*),

with h the function defined in Theorem 1 for φ = L(θ, ·). Taking expectations on both sides gives

    E[L(θ, θ̂)] − E[L(θ, θ*)] ≥ E[ inf h · Var(θ̂ | T) ].

In particular, for squared error loss the improvement is E[Var(θ̂ | T)], whereas using the original Jensen inequality only establishes the cruder unquantified inequality above. The improved bounds by partitioning discussed next apply here as well.

5 Improved bounds by partitioning

Theorem 1 can improve on Jensen's inequality only if inf h > 0. In other cases, one can often sharpen the bounds by partitioning the domain, following an approach also used in Walker. Let I₁, …, I_m partition the support of X, and let Z be the discrete random variable with distribution P(Z = j) = P(X ∈ I_j); write μ_j = E[X | Z = j]. By the law of total expectation, it is easy to see that

    E[φ(X)] − φ(μ) = Σ_j P(Z = j) ( E[φ(X) | Z = j] − φ(μ_j) ) + ( Σ_j P(Z = j) φ(μ_j) − φ(μ) ).

It follows from Theorem 1, applied conditionally on Z = j, that each within-piece gap is at least inf_{x ∈ I_j} h_j(x) · Var(X | Z = j), where h_j is the function of Theorem 1 centered at μ_j; we can also apply Theorem 1 to the second term, which is the Jensen gap of the discrete variable taking value μ_j with probability P(Z = j). Combining the two bounds gives a lower bound on the Jensen gap, and replacing each infimum by the corresponding supremum on the right-hand side gives an upper bound. Since the left side is a sum of such terms, the Jensen gap is positive whenever there exists an interval I_j with inf_{x ∈ I_j} h_j(x) · Var(X | Z = j) > 0. Note that a finer partition does not necessarily lead to a sharper lower bound; the focus of the partition should therefore be on isolating the part of the interval close to μ. Consider again the exponential example: dividing the support into three intervals with equal probabilities gives a lower bound that is a huge improvement over the Jensen bound of 0, with the actual Jensen gap lying between the resulting bounds; the upper bound, however, provides no improvement over Theorem 1.

6 Summary

This paper proposes a new sharpened version of Jensen's inequality. The proposed bound is simple, insightful, and broadly applicable by imposing minimum assumptions, and it provides fairly accurate results in spite of its simple form; it can be incorporated into any statistics course.

References

Abramovich, S. and Persson, L. E. New estimates of the Jensen gap. Journal of Inequalities and Applications.
Abramovich, S., Persson, L. E., and Samko, N. New scales of refined Jensen and Hardy type inequalities. Mathematical Inequalities and Applications.
Becker, R. A. The variance drain and Jensen's inequality. Technical report, CAEPR Working Paper, available at SSRN.
Bennish, J. A proof of Jensen's inequality. Missouri Journal of Mathematical Sciences.
Casella, G. and Berger, R. L. Statistical Inference. Duxbury, Pacific Grove.
de Carvalho, M. Mean, what do you mean? The American Statistician.
Dempster, A. P., Laird, N. M., and Rubin, D. B. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological).
Dragomir, S. S. Jensen's integral inequality for power series with nonnegative coefficients and applications. RGMIA Research Report Collection.
Horvath, L., Khan, K. A., and Pecaric, J. Refinement of Jensen's inequality for operator convex functions. Advances in Inequalities and Applications.
Jensen, J. L. W. V. Sur les fonctions convexes et les inégalités entre les valeurs moyennes. Acta Mathematica.
Walker, S. G. On a lower bound for the Jensen inequality. SIAM Journal on Mathematical Analysis.
Wasserman, L. All of Statistics: A Concise Course in Statistical Inference. Springer Science and Business Media.
10
Stochastic k-Server: How Should Uber Work?

Sina Dehghani, Soheil Ehsani, Vahid Liaghat, MohammadTaghi Hajiaghayi, Saeed Seddighin*

May

Abstract. In this paper we study a stochastic variant of the celebrated k-server problem, in which we are required to minimize the total movement of k servers that are serving an online sequence of requests in a metric. In the stochastic setting we are given independent distributions in advance, and at every time step a request is drawn from the corresponding distribution. Designing the optimal online algorithm in such a setting is hard; therefore the emphasis of our work is on designing an approximately optimal online algorithm. We first show a structural characterization for a certain class of non-adaptive online algorithms: we prove that in general metrics, the best of such algorithms has a cost no worse than three times that of the optimal online algorithm. Next, we present an integer program that finds the optimal algorithm of this class for any arbitrary metric. Finally, by rounding the solution of the linear relaxation of this program, we present an online algorithm for the stochastic k-server problem with an approximation factor of 3 in line and circle metrics and a factor of O(log n) in general metrics of size n. In this way, we achieve an approximation factor that is independent of k, the number of servers. Moreover, we define the Uber problem, motivated by the extraordinary growth of online network transportation services. In the Uber problem, each demand consists of two points, a source and a destination, in the metric; to serve a demand, a server must move to its source and then to its destination. The objective is again minimizing the total movement of the k given servers. We show that given an α-approximation algorithm for the k-server problem, we can obtain an (α + 2)-approximation algorithm for the Uber problem. Motivated by the fact that demands are usually highly correlated with the time (e.g., the time of the day or the day of the week), we also study the stochastic Uber problem; using our results for stochastic k-server, we obtain a constant-factor approximation algorithm for the stochastic Uber problem in line and circle metrics and an O(log n)-approximation algorithm for general metrics. Furthermore, we extend our results to the correlated setting, where the probability of a request arriving at a certain point depends on the time step and also on the previously arrived requests.

1 Introduction

The k-server problem is one of the most fundamental problems in online computation, extensively studied over the past decades. In this problem we have k mobile servers on a metric space. We receive an online sequence of requests, where the i-th request is a point of the metric space; upon its arrival we need to move a server to that point, at a cost equal to the distance from the current position of the server. The goal is to minimize the total cost of serving all requests.

* University of Maryland. Supported in part by an NSF CAREER award, an NSF BIGDATA grant, an NSF MEDIUM grant, a DARPA grant and another DARPA SIMPLEX grant, and a grant from Facebook.

Manasse, McGeoch, and Sleator introduced the k-server problem as a natural generalization of several online problems, and as a building block for other problems such as metrical task systems. The problem is considered in the adversarial model, where the online algorithm has no knowledge of future requests. Following the proposition of Sleator and Tarjan, the performance of an online algorithm is evaluated using competitive analysis: an online algorithm ALG is compared to the offline optimum OPT, which is aware of the entire input in advance; for a sequence of requests σ, letting ALG(σ) and OPT(σ) denote the total costs of ALG and OPT for serving σ, the algorithm is c-competitive if ALG(σ) ≤ c · OPT(σ) for every σ. Manasse et al. showed a lower bound of k on the competitive ratio of any deterministic algorithm in every metric space with at least k + 1 points. The celebrated k-server conjecture states that this bound is tight for general metrics. For several years the known upper bounds were exponential in k, until a major breakthrough was achieved by Koutsoupias and Papadimitriou, who showed that the work function algorithm is (2k − 1)-competitive. Proving the tight competitive ratio has been the holy grail of the field for the past two decades, and the challenge has led to the study of the problem in special spaces: the uniform metric (also known as the paging problem), the line, the circle, and tree metrics (see the references therein); we also refer the reader to Section 1.2 for a short survey of randomized algorithms, and in particular the recent result of Bansal, Buchbinder, Madry, and Naor, which achieves a polylogarithmic competitive ratio for discrete metrics comprising n points. The line metric, the one-dimensional Euclidean metric space, has been of particular interest for developing new ideas: Chrobak, Karloff, Payne, and Vishwanathan first settled the conjecture on the line by designing an elegant k-competitive algorithm; Chrobak and Larmore generalized their approach to tree metrics; and later Bartal and Koutsoupias proved that the work function algorithm is also k-competitive on the line.
Focusing on the special case of the line, Bartal, Chrobak, and Larmore further showed that randomized algorithms can break the barrier of the lower bound of k, giving such an algorithm for the case of two servers. Despite the strong lower bounds for the problem, many heuristics and algorithms are constant-competitive in practice; for example, in the paging problem, the special case of uniform metrics, the least recently used (LRU) strategy is shown to be experimentally constant-competitive (see Section 1.2). In this paper we also present an algorithm that we run on real-world data to measure its empirical performance; in particular, we use a distribution of car accidents obtained from road safety data, and the experiments illustrate that our algorithm performs even better in practice than its guarantee.

The idea of comparing the performance of an online algorithm to the offline optimum, which is aware of the future, has led to crisp and clean solutions. However, this approach is not without downsides: results in the online model are often pessimistic, leading to theoretical guarantees hardly comparable to experimental results. Indeed, one way to tighten this gap is to use stochastic information about the input data, which we describe in this paper. We also point out that competitive analysis is possibly not the most suitable approach when stochastic information is available: when the distributions from which the input is generated are known, one can in principle use dynamic programming over an enumeration of future events to derive the optimal movement of the servers. Unfortunately, finding the optimal online solution using the distributions is hard: by a reduction from the stochastic version of finding a median of a set of vertices, one can construct an instance of stochastic k-server for which the best initialization of the servers encodes the median; moreover, the natural dynamic programming approach takes exponential time. This raises the question of how well one can perform in comparison to the best online solution; in the rest of the paper we formally define the model and address this question.

A natural generalization is to assume the demands are two points instead of one, consisting of a source and a destination: to serve such a demand, a server needs to move to the source and then move to the destination. We call this problem the Uber problem. One can see that the k-server problem is the special case of the Uber problem in which sources equal destinations. We also show that, given an α-approximation algorithm for the k-server problem, we can obtain an (α + 2)-approximation algorithm for the Uber problem; thus our results also apply to the Uber problem.

Stochastic model. In this paper we study the stochastic k-server problem, in which the input is not chosen adversarially but consists of draws from given probability distributions. The problem has lots of applications, from network transportation to equipment replacement in data centers. Current mega data centers contain hundreds of thousands of servers and switches with limited lifespans; for example, servers usually retire after three years. An efficient way to scale the maintenance of data centers is automation: robots are designed to handle maintenance tasks such as repairs or manual operations, and the server replacement process can be modeled as requests that must be satisfied by robots, where the robots are modeled as the servers. The problem also has applications in physical networks. For example, suppose we model a shopping service such as Google Express with the k-server problem: we receive an online sequence of shopping requests for different stores, and shopping cars (the servers) serve the requests by traveling to the stores. It is quite natural to assume that at a certain time, requests arrive from a distribution that can be discovered by analyzing the history; for example, an Uber request is more likely to go from the suburbs to midtown in the morning and from midtown to the suburbs at night.

We formalize the stochastic information as follows: for every time step i, a discrete probability distribution P_i is given in advance, and the request at step i is drawn from P_i. The distributions are chosen by an adversary and are assumed to be independent but not necessarily identical. This model is inspired by the model of prophet inequalities.¹ As we mention, the case of the line metric is proven to be interesting even in this restricted case; in studying the problem in this paper we focus mainly on the line metric, though our results carry over to the circle metric and general metrics as well.

In the adversarial model, the competitive ratio seems to be the only notion for analyzing the performance of online algorithms. However, in the presence of stochastic information, one can derive a much better benchmark that allows making finer distinctions between online algorithms. Recall that in the offline setting, for a class of algorithms, the natural notion for measuring the performance of an algorithm ALG is its approximation ratio, defined as the worst-case ratio between the cost of ALG and that of the optimal algorithm OPT of the class. In this paper we also measure the performance of an online algorithm by its approximation ratio compared to the optimal online solution. Note that, given the distributions, one can iteratively compute the optimal online solution by solving a dynamic program, described next.
¹ In the prophet inequality setting, given (not necessarily identical) distributions D₁, …, Dₙ, an online sequence of values x₁, …, xₙ is drawn, and an onlooker has to choose one item in succession, as the values are revealed step by step; a value can be chosen only at the time of its arrival. The goal is to maximize the chosen value.

For every i and every possible placement A of the servers on the metric points (called a configuration), let f_i(A) denote the minimum expected cost of an online algorithm for serving the first i requests and then moving the servers to configuration A. Note that f can be computed inductively via the recursive formula

    f_i(A) = E_{r_i ∼ P_i} [ min_B ( f_{i−1}(B) + d(B, A) ) ],   subject to B serving r_i,

with f₀ initially zero for every configuration, where d(B, A) is the minimum cost of moving the servers between the two configurations.

1.1 Our results. Our first main result is a constant approximation algorithm for the line metric, where the distributions of different time steps are not necessarily identical.

Theorem 1. There exists a 3-approximation online algorithm for the stochastic k-server problem in the line metric, with running time polynomial in the sum of the sizes of the supports of the input distributions.

The same guarantee holds for the circle metric. For general metrics we present an algorithm with a logarithmic approximation guarantee.

Theorem 2. There exists an O(log n)-approximation online algorithm for the stochastic k-server problem in a general metric of size n.

We prove Theorems 1 and 2 using two important structural results. The first key ingredient is a general reduction from the class of all online algorithms to a restricted class of non-adaptive algorithms, losing only a constant factor in the approximation ratio. Recall that a configuration is a placement of the servers in the metric. We say an algorithm ALG is non-adaptive if it follows this procedure: ALG fixes a sequence of configurations A₀, A₁, A₂, …; it starts by placing the servers at A₀, and upon the arrival of request r_i it (i) moves the servers to configuration A_i, (ii) next moves the closest server to r_i, and finally (iii) returns that server to its original position. We first prove the following structural result.

Theorem 3. In the stochastic k-server problem on a general metric, the cost of the optimal non-adaptive online algorithm is within a factor 3 of that of the optimal online algorithm.

Using the aforementioned reduction, we can focus on designing an optimal non-adaptive algorithm, which we begin by formulating as an integer program. The second ingredient is a relaxation of this program that formalizes a natural fractional variant of the problem. In this variant, a configuration is a fractional assignment of server mass to the points of the metric with total mass k; to serve a request at a point, one needs to move a mass of at least one to that point, and the cost of moving server mass is defined naturally as the integral of the movement of infinitesimal pieces of server mass. By solving the linear relaxation of the integer program, we achieve the optimal fractional non-adaptive algorithm. Finally, we prove Theorems 1 and 2 by leveraging the following rounding techniques: a rounding method for the line (also observed previously; we provide a proof for the case of the line in Section 5 for the sake of completeness), and a rounding method for general metrics via the well-known embedding of a metric into a distribution over trees, losing a logarithmic factor in the distortion; for trees, Bansal et al. use a natural rounding method, similar to that of Blum, Burch, and Kalai, to show that a fractional movement on trees can be rounded to an integral counterpart losing only a constant factor.

Theorem 4 (first proven in prior work). Let ALG_f denote a fractional algorithm for the line or circle. One can use ALG_f to derive a randomized integral algorithm ALG such that for every request sequence, in expectation over the internal randomness of ALG, the cost of ALG equals that of ALG_f; furthermore, in the stochastic model ALG can be derandomized.

Theorem 5 (proven in prior work). Let ALG_f denote a fractional algorithm for a metric of size n. One can use ALG_f to derive a randomized integral algorithm ALG such that for every request sequence, the expected cost of ALG is at most O(log n) times the cost of ALG_f.

We also show that in the stochastic setting, when the number of possible input scenarios is polynomial, even if the distributions are correlated, one can compute the best fractional non-adaptive online algorithm in polynomial time. Note that since the number of placements of k servers on n points is exponential, it is not possible to enumerate all possible choices of a non-adaptive online algorithm; we solve this by presenting a relaxation of the problem whose size is polynomial, therefore obtaining the following result (we present the formal model and analysis in Appendix A).

Theorem 6. An optimal non-adaptive online algorithm for the stochastic k-server problem in the correlated setting on the line and circle can be computed in time polynomial in the number of possible scenarios; for general metrics, an O(log n)-approximation algorithm is obtained.

Finally, we show that any algorithm for k-server yields an algorithm for the Uber problem via a simple reduction.

Theorem 7. Let ALG denote an α-approximation algorithm for the k-server problem. One can use ALG to derive an (α + 2)-approximation algorithm for the Uber problem.

Proof. Consider an instance I of the Uber problem, and let s_i and t_i denote the source and destination of demand i. Generate an instance I′ of the k-server problem by removing every destination; in other words, the demands of I′ are the sources s₁, s₂, …. We use ALG on I′ to provide a solution for I as follows: to satisfy demand i, we use ALG to move a server to s_i; we then move that server to t_i along the shortest path and move it back to s_i. Let OPT_U and OPT_K denote the costs of the optimal solutions of I and I′, respectively, and let d denote the distance in the metric. Every solution of I induces a solution of I′ of no greater total movement, hence OPT_K ≤ OPT_U; moreover, Σ_i d(s_i, t_i) ≤ OPT_U, since every solution of I must traverse each demand. Therefore the cost of our solution is at most α · OPT_K + 2 Σ_i d(s_i, t_i) ≤ α · OPT_U + 2 · OPT_U = (α + 2) · OPT_U.
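To make the benchmark above concrete, the following brute-force sketch (our own illustration, not the paper's algorithm) computes the expected cost of the optimal online algorithm on a tiny line metric by running the recursion for f backwards as a cost-to-go function over all configurations of distinct points; its running time is exponential in the instance size, in line with the hardness discussion in the introduction:

```python
import itertools

def opt_online_cost(points, k, dists, start):
    """Expected cost of the optimal online algorithm for stochastic k-server
    on a tiny line metric, by backward induction over configurations.
    points: positions on the line; dists: one {request point: probability}
    dict per time step; start: initial configuration of k distinct points."""
    configs = list(itertools.combinations(sorted(points), k))

    def d(A, B):
        # On the line, the cheapest move between two configurations pairs
        # the sorted server positions in order.
        return sum(abs(a - b) for a, b in zip(A, B))

    V = {C: 0.0 for C in configs}            # cost-to-go after the last step
    for P in reversed(dists):
        V = {C: sum(p * min(d(C, B) + V[B] for B in configs if r in B)
                    for r, p in P.items())
             for C in configs}
    return V[tuple(sorted(start))]

# Example: 2 servers, 4 points, 3 time steps with different supports.
P1, P2, P3 = {0: 0.5, 3: 0.5}, {1: 1.0}, {0: 0.25, 2: 0.75}
print(opt_online_cost([0, 1, 2, 3], 2, [P1, P2, P3], (0, 3)))
```

The 3-approximation of Theorem 1 replaces this exponential enumeration by a polynomial-size linear program plus rounding.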
1.2 Related work. Randomized algorithms often perform much better in the online paradigm. For the k-server problem, a lower bound of Ω(log k) is known on the competitive ratio of randomized algorithms in common metrics. Despite the exponential gap compared to the lower bound for deterministic algorithms, little is known about the competitiveness of randomized algorithms; in fact, the known algorithms with competitive ratios below k work only for either the uniform metric (also known as the paging problem), metrics comprising few points, or two servers on the line. Two decades after the introduction of the problem, a major breakthrough was achieved by Bansal et al.: for discrete metrics of size n, they gave a randomized algorithm that achieves a polylogarithmic competitive ratio.

The case of the uniform metric has been extensively studied in various stochastic models, motivated by applications in computer caching. Koutsoupias and Papadimitriou consider two refinements of competitive analysis for server problems. First, they consider the diffuse adversary model: at every step the adversary chooses a distribution over the uniform metric (the paging problem), and the i-th request, which needs to be served, is drawn from that distribution; the distribution is not known to the online algorithm and may depend on previous requests. Their paper considers the case wherein it is guaranteed that the probability of every point is small enough, so that the next request is not predictable with absolute certainty by the adversary; the results of Koutsoupias and Papadimitriou, and later Young, pin down the optimum competitive ratio in this setting up to lower-order terms. The second refinement restricts the optimal solution to a bounded lookahead; one can hence define the comparative ratio, which indicates the ratio between the cost of the best online solution and the best solution with lookahead. They show that for the k-server problem, and more generally for metrical task systems, online algorithms admit a certain comparative ratio, and that this ratio is tight for some instances. Various other models restricting the adversary, such as the access graph model and the fault rate model, have also been considered for the paging problem; see the references therein for a survey of these results. Unfortunately, many of the stochastic settings considered for the paging problem do not seem to have natural generalizations beyond the uniform metric; for example, the distributions studied in the diffuse adversary model weaken the adversary in a way that does not extend to general metrics. In this paper we instead look for approximation algorithms within the class of online algorithms that have access to the distributions.

We would also like to mention that various online problems have previously been considered in the prophet inequality model, or in the i.i.d. model in which the distributions are identical: maximum matching, scheduling, and online network design have been extensively studied in these models. For graph connectivity problems, Garg, Gupta, Leonardi, and Sankowski consider online variants of Steiner tree and several related problems in the stochastic model; in the adversarial model there exists an Ω(log n) lower bound on the competitive ratio of any online algorithm, where n is the number of demands, yet Garg et al. show that under stochastic assumptions these problems admit online algorithms with constant or nearly constant competitive ratios. We refer the reader to the excellent book of Borodin and El-Yaniv for the study of online problems.

2 Preliminaries

In this section we formally define the stochastic k-server problem. The classical k-server problem is defined on a metric M consisting of a set of points, possibly infinitely many; for every two points x and y, let d(x, y) denote the distance of x from y, a symmetric function that satisfies the triangle inequality; precisely, for every three points x, y, and z,

    d(x, y) ≥ 0,   d(x, y) = d(y, x),   d(x, z) ≤ d(x, y) + d(y, z).

In the k-server problem the goal is to place k servers on k points of the metric and to move these servers to satisfy requests. We refer to every placement of the servers on the metric points as a configuration. Let σ = ⟨r₁, …, r_t⟩ be a sequence of requests; the goal of the k-server problem is to find configurations ⟨A₀, A₁, …, A_t⟩ such that for every i there exists a server on point r_i in configuration A_i; we say such a list of configurations is valid for the given list of requests. A valid sequence of configurations is optimal if Σ_i d(A_{i−1}, A_i) is minimized, where d(A, B) stands for the minimum cost of moving the servers from configuration A to configuration B.
An optimal sequence of configurations, when σ is known in advance, is called an optimal offline solution, OFKS(M, σ); we refer to its optimal cost of movements as |OFKS(M, σ)|. We also define the notion of a fractional configuration: an assignment of real numbers to the metric points, where the number assigned to a point specifies the mass of fractional server at that point. Every fractional configuration adheres to the condition that the total sum of the values assigned to the points is exactly equal to k. Analogously, a fractional configuration serves a request r_i if a mass of server of size at least one is assigned to point r_i, and an offline fractional solution for a given sequence of requests σ is a sequence of fractional configurations each of which serves its request.

In the online k-server problem, however, we are not given the whole sequence of requests in the beginning; we are informed of every request only upon its realization. An online algorithm A reports a configuration A₀ as its initial configuration, and upon the realization of every request r_i it returns a configuration A_i, such that the sequence of configurations is valid. An online algorithm is deterministic if it generates a unique sequence of configurations for every sequence of requests; letting ⟨A₀, …, A_t⟩ be the sequence A generates for requests σ, we denote the cost of A by |A(M, σ)|. In the online stochastic k-server problem, in addition to the metric we are also given independent probability distributions ⟨P₁, …, P_t⟩ that show the probability with which every request is realized on each point of the metric at each time step. An online algorithm in this setting generates a configuration for every request, based solely on the realized requests so far, but may also use the probability distributions. We similarly define the cost of an online algorithm A for a given sequence of requests and define the expected cost of the algorithm on metric M and distributions ⟨P₁, …, P_t⟩; for every metric and probability distributions we refer to the online algorithm with the minimum expected cost as OPT_M.

An alternative way to represent a solution is by a vector of configurations that do not necessarily serve the requests; in this representation, the cost of a solution is the movement between consecutive configurations plus, for every request, twice the minimum distance of a server of the current configuration to the request, an additional cost that can be thought of as moving a server to serve the request and returning it back to its original position. Thus every solution in this representation can be transformed into the usual representation with no greater cost, and similarly for fractional configurations, with the minimum cost incurred by placing a mass of server on the request point. We use this representation of configurations and solutions throughout the paper. The emphasis of this paper is on the stochastic k-server problem in the line metric, defined as the metric with points on the real line where the distance of two points is always equal to their Euclidean distance. Moreover, we show that deterministic algorithms are as powerful as randomized algorithms in this setting; therefore we focus on deterministic algorithms, and thus omit the term deterministic: every time we use the word algorithm, we mean a deterministic algorithm unless otherwise explicitly mentioned.

3 Structural characterization

Recall that an online algorithm must fulfill the task of reporting a configuration upon the arrival of each request, based on the information so far. We say an algorithm is request-oblivious if it reports the i-th configuration regardless of the i-th request; precisely, no matter what the request is, it generates the configuration given the list of past configurations, the sequence of past requests, and the sequence of probability distributions. In the following we show that every online algorithm can be turned into a request-oblivious algorithm whose cost is at most 3 times more, for any given sequence of requests.

Lemma 3.1. Let A be an online algorithm for the stochastic k-server problem on a metric M. There exists a request-oblivious algorithm B such that |B(M, σ)| ≤ 3 |A(M, σ)| for every sequence of requests σ.

Proof. Let σ be a sequence of requests, and define the request-oblivious algorithm B whose i-th configuration is the configuration A reports on the same input with the i-th request dropped from the sequence. Let ⟨A₀, A₁, …⟩ be the configurations A generates and ⟨B₀, B₁, …⟩ the output of B; by construction B₀ = A₀, and since A serves every request, the cost B pays to serve r_i from B_i is at most twice the distance from B_i to r_i, which by the triangle inequality is bounded in terms of the corresponding movements of A. Summing the movement and serving costs, this inequality along with the definition of the costs implies |B(M, σ)| ≤ 3 |A(M, σ)|; since this holds for every σ, the proof is complete.

An immediate corollary of Lemma 3.1 is that the optimal request-oblivious algorithm has cost at most 3 · |OPT_M|. Therefore we can focus on request-oblivious algorithms and lose only a factor of 3 in comparison to the optimal online algorithm.
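The serve-and-return representation above makes the cost of a request-oblivious (and, later, non-adaptive) solution very easy to evaluate. A minimal sketch (ours), assuming the line metric:

```python
def line_dist(A, B):
    # On the line, the optimal server matching pairs sorted positions in order.
    return sum(abs(a - b) for a, b in zip(sorted(A), sorted(B)))

def nonadaptive_cost(configs, requests):
    """Cost of a committed solution A_0, A_1, ..., A_T on the line: before
    serving r_t the servers move from A_{t-1} to A_t; then r_t is served by
    the nearest server, which returns to its place (hence the factor of 2)."""
    total = 0.0
    for t, r in enumerate(requests, start=1):
        total += line_dist(configs[t - 1], configs[t])     # reconfiguration
        total += 2 * min(abs(s - r) for s in configs[t])   # serve and return
    return total
```

Taking the expectation of this quantity over the request distributions is exactly the objective that the integer program of the next section minimizes.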
The following key structural lemma goes one step further.

Lemma 3.2. For every request-oblivious algorithm B, there exists a randomized request-oblivious algorithm C with expected cost no greater than that of B which is oblivious not only to the last request but also to all requests that come prior.

Proof. Given the request-oblivious online algorithm B, we construct an online algorithm C that is oblivious to all requests as follows: given the configurations and probability distributions as input, C draws a random sequence of requests from the distributions and, at each step, reports the configuration that B would generate on that drawn sequence restricted to the preceding steps. Due to this construction, the expected cost of C at every step, over its random draw and a random sequence of real requests, equals the expected cost of B at the same step over a random sequence of requests drawn from the distributions. Therefore the expected costs of the two algorithms over random request sequences are equal.

Lemma 3.2 states that there always exists an optimal randomized request-oblivious online algorithm that returns its configurations regardless of the requests; we call such an algorithm non-adaptive. Since a non-adaptive algorithm is indifferent to the sequence of requests, we can assume it generates a sequence of configurations based on the distributions alone, and among optimal non-adaptive algorithms a deterministic sequence of configurations is optimal as well; thus there always exists an optimal non-adaptive algorithm that is deterministic. By Lemma 3.1 we know the optimal request-oblivious algorithm is a 3-approximation of OPT_M, and the same therefore holds for the optimal non-adaptive algorithm.

Theorem 3.3. There exists a sequence of configurations ⟨A₀, A₁, …, A_t⟩ such that the online algorithm that starts at A₀ and always returns configuration A_i upon the arrival of request r_i has an approximation factor of 3.

4 Approximation via fractional solutions

In this section we provide a fractional online algorithm for the k-server problem that can be implemented in polynomial time. Note that by Theorem 3.3 we know there exist configurations ⟨A₀, …, A_t⟩ such that the expected cost of the algorithm that always returns these configurations is at most 3 times the cost of the optimal online algorithm. We therefore write an integer program to find the sequence of configurations with the least expected cost. Next, we provide a relaxed version of the integer program and show that every feasible solution of the relaxation corresponds to a fractional non-adaptive online algorithm for the stochastic k-server problem; hence solving the linear program, which can be done in polynomial time, gives us a fractional online algorithm for the problem.

4.1 Linear program. Recall that, given the independent distributions, a non-adaptive online stochastic algorithm is represented by configurations ⟨A₀, …, A_t⟩: upon the arrival of request r_i, it moves the servers to configuration A_i, one server serves r_i and goes back to its position. The objective is to find configurations such that the cost of moving between consecutive configurations, in addition to the expected cost of serving the requests, is minimized; the problem can therefore be formulated in an offline manner. We first provide an integer program for finding a vector of configurations with the least cost. The decision variables represent the configurations, the movements of the servers from one configuration to another, and the way each possible request is served at each particular time step: for every time step i and point p, a variable y_{i,p} denotes the number of servers at node p; for every pair of nodes p and q, a movement variable z_{i,p,q} denotes the number of servers going from p to q in the next round; and for every possible request point r, variables x_{i,r,q} denote whether a request at r is served by a server at q. In the following integer program, the first set of constraints ensures that the number of servers at the nodes at each time is updated correctly according to the movement variables; the second set of constraints ensures that every possible request is served by at least one server; and the third set of constraints ensures that a possible request can only be served by a node that holds a server, since by definition an empty node cannot serve it. The cost of a sequence of configurations consists of the movement costs plus the expected serving costs; thus the objective is to minimize the expression below, where π_{i,r} denotes the probability that r is requested at time i:

    minimize   Σ_i Σ_{p,q} d(p, q) · z_{i,p,q}  +  Σ_i Σ_r π_{i,r} · Σ_q 2 d(r, q) · x_{i,r,q}
    subject to y_{i,q} = y_{i−1,q} − Σ_p z_{i,q,p} + Σ_p z_{i,p,q}      for all i, q,
               Σ_q x_{i,r,q} ≥ 1                                       for all i and r in the support of P_i,
               x_{i,r,q} ≤ y_{i,q}                                      for all i, r, q,
               Σ_p y_{0,p} = k,  with y, z integral and x ∈ {0, 1}.

We consider the relaxation of this integer program in which the integrality constraints are dropped and all variables may take nonnegative real values (with x_{i,r,q} ≤ 1). The resulting linear program has polynomially many variables and constraints, and its optimal solution corresponds to the best fractional non-adaptive solution.
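A compact way to see the relaxation in action is to build it with an off-the-shelf LP solver. The sketch below is ours: it assumes the PuLP package, the variable and function names are hypothetical, and it mirrors the program written above rather than reproducing the paper's exact formulation.

```python
# pip install pulp
import pulp

def fractional_lp(points, k, dist_seq, start, d):
    """LP relaxation for the best fractional non-adaptive solution.
    y[t,p]: server mass at p after the move of step t; z[t,p,q]: mass moved
    p -> q at step t; w[t,r,q]: fraction of a potential request at r served
    by mass at q (cost 2*d(r,q): serve and return). dist_seq[t] is a dict
    {request point: probability}; points are assumed hashable (e.g. ints)."""
    T = len(dist_seq)
    lp = pulp.LpProblem("stochastic_k_server", pulp.LpMinimize)
    y = {(t, p): pulp.LpVariable(f"y_{t}_{p}", 0, k)
         for t in range(T + 1) for p in points}
    z = {(t, p, q): pulp.LpVariable(f"z_{t}_{p}_{q}", 0, k)
         for t in range(1, T + 1) for p in points for q in points}
    w = {(t, r, q): pulp.LpVariable(f"w_{t}_{r}_{q}", 0, 1)
         for t in range(1, T + 1) for r in dist_seq[t - 1] for q in points}
    for p in points:                                  # initial configuration
        lp += y[(0, p)] == start.get(p, 0)
    for t in range(1, T + 1):
        for p in points:                              # mass conservation
            lp += y[(t, p)] == y[(t - 1, p)] \
                  - pulp.lpSum(z[(t, p, q)] for q in points) \
                  + pulp.lpSum(z[(t, q, p)] for q in points)
        for r in dist_seq[t - 1]:
            lp += pulp.lpSum(w[(t, r, q)] for q in points) >= 1  # fully served
            for q in points:
                lp += w[(t, r, q)] <= y[(t, q)]       # only present mass serves
    lp += pulp.lpSum(d(p, q) * z[(t, p, q)] for (t, p, q) in z) + \
          pulp.lpSum(prob * 2 * d(r, q) * w[(t, r, q)]
                     for t in range(1, T + 1)
                     for r, prob in dist_seq[t - 1].items() for q in points)
    lp.solve()
    return {key: var.value() for key, var in y.items()}

# Tiny usage example: one server on three line points, one time step.
masses = fractional_lp([0, 1, 2], 1, [{0: 0.5, 2: 0.5}], {1: 1},
                       lambda p, q: abs(p - q))
```

Turning the resulting fractional masses into an actual randomized integral algorithm is the subject of the rounding section that follows.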
5 Reduction from integral to fractional

In this section we show how to obtain an integral algorithm for the stochastic k-server problem from a fractional algorithm. We first show that every fractional algorithm for the line metric can be modified to an integral algorithm with the same cost. Next, we study the problem on HST metrics and give a rounding method that produces an integral algorithm from a fractional algorithm while losing only a constant factor. Finally, we leverage previously known embedding techniques showing that every metric can be embedded into HSTs with distortion O(log n), which leads to a rounding method obtaining an integral algorithm from every fractional algorithm on general metrics while losing a factor of O(log n); combining this with the 3-approximation fractional algorithm provided in Section 4, we achieve an O(log n)-approximation algorithm for the stochastic k-server problem on general graphs.

5.1 Integrals are as strong as fractionals on the line. In this section we show that every fractional algorithm for the line metric can be derandomized into an integral solution with the same expected cost. The rounding method provides, for every fractional configuration, an integral configuration such that the expected distance between the integral counterparts of two configurations equals the distance between the fractional configurations, and such that at every point of the metric with a server mass of size at least one, there exists a server in the integral configuration. For every point p of the line, let w_A(p) denote the amount of server mass at node p in the fractional configuration A, and define the mass function g_A(x) as the minimum position at which the mass gathered when sweeping the line from left to right first reaches the amount x. The rounding algorithm is as follows: pick a random real number θ from the interval [0, 1); the rounded configuration contains servers at the positions g_A(θ), g_A(θ + 1), …, g_A(θ + k − 1). Note that the rounding method uses the same θ for all configurations: precisely, we draw θ first and use this single number to construct the integral counterparts of all the fractional configurations of the algorithm. The following two lemmas show that the desired properties hold for the proposed rounding algorithm.

Lemma 5.1. Let A be a fractional configuration with a mass of at least 1 at point p; then the rounded configuration has a server on p.

Proof. Due to the construction of the rounding method, between every two consecutive servers of the rounded configuration, the total mass of the fractional solution strictly between them is less than 1. Therefore the rounding must put a server at point p: otherwise, the total mass of the fractional solution between the first server before p and the first server after p would be at least 1, a contradiction.

The next lemma shows that the rounding preserves the distances between configurations in expectation.

Lemma 5.2. Let A and B be two fractional configurations at distance d(A, B); the following holds for the distances of their integral counterparts: E_θ[ d(round_θ(A), round_θ(B)) ] = d(A, B).

Proof. The key point behind the proof of the lemma is that the distance between two fractional configurations on the line can be formulated as

    d(A, B) = ∫₀^k | g_A(s) − g_B(s) | ds,

where the integral configurations place servers at the points g_A(θ + j) and g_B(θ + j). Since θ is drawn uniformly at random from [0, 1) at the beginning of the rounding method, the expected distance between the two rounded configurations is exactly Σ_{j=0}^{k−1} E_θ | g_A(θ + j) − g_B(θ + j) |, which is equal to the integral above, and hence equal to d(A, B).

Theorem 5.3. Given a fractional online algorithm for the k-server problem on the line metric, there exists an online integral solution for the problem with the same expected cost.

5.2 Reduction from general graphs to HSTs. An HST is an undirected rooted tree in which every leaf represents a point of the metric, and the distance between a pair of points of the metric equals the distance between the corresponding leaves in the tree. The weights of the edges of an HST are uniquely determined by the depth of the vertices they connect: precisely, the weight of the edges between a vertex and its children is a fixed function of the vertex's depth, where h stands for the height of the tree. Since HSTs are well structured, designing algorithms for HSTs is relatively easy in comparison to complex metrics; therefore a classic method of alleviating the complexity of such problems is to first embed the metric into an HST with low distortion and then solve the problem on the tree. Perhaps the most important property of HSTs is the following observation: for every pair of leaves of an HST, their distance is uniquely determined by the depth of their deepest common ancestor, and the higher the depth of the common ancestor, the lower the distance between the leaves; therefore the closest leaves to a leaf are the ones that share the most common ancestors with it. Bansal et al. propose a method for rounding every fractional solution of the k-server problem on an HST to an integral solution, losing only a constant factor:

Theorem 5.4 (Bansal et al.). Let T be an HST with leaves L, and let ⟨A₀, A₁, …⟩ be a sequence of fractional configurations on L. There is an online procedure that maintains a sequence of randomized integral configurations satisfying the following two properties: at every time step, the integral state is consistent with the fractional state; and when the fractional state changes, incurring some movement cost, the integral state is modified accordingly, incurring at most a constant times that cost in expectation.
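The sweep rounding of Section 5.1 is short enough to state in code. A minimal sketch (ours), assuming NumPy and a sorted list of line positions:

```python
import numpy as np

def round_on_line(points, mass, theta):
    """Round a fractional line configuration to an integral one: sweep the
    line left to right and place the i-th server where the accumulated mass
    first reaches theta + i, with theta drawn uniformly from [0,1) once, up
    front, and reused for every configuration of the algorithm.
    points must be sorted; mass[i] is the server mass on points[i] (sum k)."""
    cum = np.cumsum(mass)
    k = int(round(cum[-1]))
    targets = theta + np.arange(k)          # theta, theta+1, ..., theta+k-1
    idx = np.searchsorted(cum, targets)     # first index where cum >= target
    return [points[i] for i in idx]
```

Lemma 5.1 corresponds to the fact that a point carrying mass at least one must absorb one of the thresholds θ + i, and Lemma 5.2 to the fact that averaging over θ recovers the fractional transport cost exactly.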
kleinberg lucier beating ordered prophets arxiv preprint achlioptas chrobak noga competitive analysis randomized paging algorithms theoretical computer science alaei hajiaghayi liaghat online matching applications allocation proceedings acm conference electronic commerce pages acm alaei hajiaghayi liaghat online stochastic generalized assignment problem approximation randomization combinatorial optimization algorithms techniques pages springer alaei hajiaghayi liaghat pei saha adcell allocation cellular networks pages springer albers favrholdt giel paging locality reference proceedings annual acm symposium theory computing pages acm bansal buchbinder madry naor algorithm problem bansal buchbinder naor randomized algorithm weighted paging journal acm jacm bartal chrobak larmore randomized algorithm two servers line information computation bartal koutsoupias competitive ratio work function algorithm problem theoretical computer science becchetti modeling locality probabilistic analysis lru fwf pages springer blum burch kalai paging foundations computer science annual symposium borodin online computation competitive analysis cambridge university press borodin irani raghavan schieber competitive paging locality reference journal computer system sciences chrobak karloff payne vishwnathan new ressults server problems siam journal discrete mathematics chrobak larmore optimal algorithm servers trees siam journal computing dehghani ehsani hajiaghayi liaghat seddighin online survivable network design prophets dehghani kash key online stochastic scheduling pricing clouds denning working set model program behavior communications acm fakcharoenphol rao talwar tight bound approximating arbitrary metrics tree metrics proceedings annual acm symposium theory computing pages acm fiat karp luby mcgeoch sleator young competitive paging algorithms journal algorithms fiat mendel truly online paging locality reference foundations computer science annual symposium pages ieee fiat mendel better algorithms unfair metrical task systems applications siam journal computing garg gupta leonardi sankowski stochastic analyses online combinatorial optimization problems proceedings nineteenth annual symposium discrete algorithms pages society industrial applied mathematics hajiaghayi kleinberg sandholm automated online mechanism design prophet inequalities aaai volume pages irani karlin phillips strongly competitive algorithms paging locality reference siam journal computing karlin phillips raghavan markov paging siam journal computing karloff rabani ravid lower bounds randomized motionplanning algorithms siam journal computing koutsoupias papadimitriou conjecture journal acm jacm krengel sucheston semiamarts finite values bull amer math soc manasse mcgeoch sleator competitive algorithms server problems journal algorithms mcgeoch sleator strongly competitive randomized paging algorithm algorithmica panagiotou souza adequate performance measures paging proceedings annual acm symposium theory computing pages acm sleator tarjan amortized efficiency list update paging rules communications acm problem fractional analysis phd thesis masters thesis university chicago http uchicago pdf young bounding diffuse adversary soda volume pages correlated setting section study problem probability distributions independent recall independent setting sequence requests referred correlated model assume different possibilities given form set sequences moreover assume probability scenario denoted given advance given list different scenarios 
The goal is to design an online algorithm that serves every request prior to the arrival of the next request, such that the expected overall movement of the servers is minimized.

Modeling the problem with an integer program. We first write an integer program for the problem and show that every solution of the program can be uniquely mapped to a deterministic online algorithm; moreover, every online algorithm can be mapped to a feasible solution of the program. More precisely, the solutions of the program are equivalent to online algorithms for the problem, and we furthermore show how to derive an online algorithm from a solution of the integer program; these two facts imply that an optimal deterministic online algorithm can be obtained from an optimal solution of the program.

To better convey the idea behind the integer program, we first introduce a trie T containing the scenario sequences; we use T_v to denote the path from the root to a node v. Each node v represents a request that may occur, conditioned on the requests that occur beforehand; besides, every leaf uniquely represents one scenario, and we denote by S(v) the set of indices of the scenarios whose leaves are in the subtree of v. At each step the remaining options narrow, hence a new request is informative: we learn which scenarios can no longer occur. For each node v we define the probability of the requests of T_v happening. We extend the tree by adding additional nodes that form a path leading to the root; these nodes, plus the root, represent the initial configuration of the servers, and we call them the initial set.

We show the movement of the servers in the metric space by means of tokens: we begin by putting one token on each node of the initial set, each token corresponding to one of the servers. When a server moves to serve a request, the corresponding token moves to the node of the trie that represents the request. Note that at each step, the discrimination between scenarios in terms of what has occurred so far causes a deterministic online algorithm to serve the first requests the same way on all scenarios that agree on them, which results in a unique way of serving each trie node. We use downward links to show which server serves each request; the next paragraphs explain how the links construct the integer program. A link from node u to a descendant v indicates that the algorithm uses the server resting at u to serve the request at v, without using that server to serve any request in between; such a consecutive serving may occur with the probability of T_v, in which case the algorithm moves a server and pays the distance cost between the two points of the metric corresponding to u and v. The links must take care of two conditions. First, since each request must be served by a server, at least one link must enter each node; without loss of generality we can assume exactly one does, that is, we serve each request with exactly one server. Second, a server serving a request must previously have rested at the tail of the corresponding link and must serve no other request in between; this condition guarantees that, along the serving sequence of a server, each serving is followed by at most one next serving. The following integer program maintains both conditions while minimizing the expected overall movement of the servers, with the objective function

    minimize Σ_{links (u,v)} Pr[T_v] · d(u, v) · x_{u,v},

where the binary variable x_{u,v} indicates whether the link (u, v) is used. Next, we relax the constraints of the program to make it linear: instead of assigning each x_{u,v} either 0 or 1, we let it be a real number in [0, 1]; thus the integer program turns into a linear program with the same objective function and the relaxed constraints. Note that every feasible solution of the linear program has a corresponding fractional solution of the problem. Since an optimal solution of the linear program can be found in polynomial time, using the rounding methods presented in Section 5 we can obtain an optimal non-adaptive online algorithm for the line metric and an O(log n)-approximation algorithm for general metrics, as stated in Theorem 6.

B Experimental results

Our goal in this section is to make an evaluation of our method on the line with a real-world data set. The line is an appropriate model for plenty of applications; examples could be sending road-maintenance trucks to different points of a road, or sending emergency vehicles to accident scenes along a highway. In our experiment we take the case of car accidents. As our data set, we use the Road Safety data to find the distribution of accidents along a road in Great Britain; the accidents occurred along a highway, with a steady average number of accidents per month. We assume a point every few miles along the highway, for a fixed number of points in total, and we build the distributions with respect to how the accidents spread over the days of a month; this way we achieve distributions over the points along the line.

Table 1: Running times (in seconds) of our algorithm and of the optimum algorithm for increasing numbers of servers; for higher numbers of servers the optimum solution is not calculable within hours.

The algorithms. We compare the performance of our method with the optimum algorithm. To find the optimum solution we use backtracking; the running time of this algorithm is exponential.
However, we use techniques such as branch and bound and exponential dynamic programming to get a fast implementation.

Results. We run different experiments on the line with the distributions explained in the previous sections. We showed an upper bound of 3 on the approximation factor of our algorithm; interestingly, in the experiments we observe an even better performance, as shown in the figure. We compare the running times of the algorithms in Table 1; note that the size of the linear program our method solves barely varies, which is in fact the reason its running time remains almost the same, whereas the running time of the optimum algorithm grows exponentially.

Figure: Performance of our algorithm compared to the optimum; the dashed curve indicates two times the cost of the optimum.
8
Foundations of algorithm configuration for combinatorial partitioning

Maria-Florina Balcan, Vaishnavh Nagarajan, Ellen Vitercik, Colin White*

May

Abstract. Clustering, and many other partitioning problems, are of significant importance in machine learning and other scientific fields. In reality, this has motivated researchers to develop a wealth of approximation algorithms and heuristics. Although the best algorithm to use typically depends on the specific application domain, worst-case analysis is often used to compare algorithms, which may be misleading if the worst-case instances occur infrequently. We thus demand optimization methods that return the algorithm configuration best suited to a given application's typical inputs. We address this problem for clustering, other partitioning problems, and integer quadratic programming by designing computationally efficient and sample efficient learning algorithms which receive samples from an application-specific distribution over problem instances and learn a partitioning algorithm with high expected performance. Our algorithms learn over common integer quadratic programming and clustering algorithm families: SDP rounding algorithms and agglomerative clustering algorithms with dynamic programming. For our sample complexity analysis, we provide tight bounds on the pseudo-dimension of these algorithm classes and show that, surprisingly, even for classes of algorithms parameterized by a single parameter, the pseudo-dimension is superconstant. In this way, our work both contributes to the foundations of algorithm configuration and pushes the boundaries of learning theory, since the algorithm classes we analyze consist of optimization procedures significantly more complex than the classes typically studied in learning theory.

* Authors' addresses: ninamf, vaishnavh, vitercik, crwhite.

1 Introduction

Problems such as clustering arise in a variety of diverse, oftentimes unrelated application domains. For example, the clustering problem in unsupervised machine learning is used to group protein sequences by function, to organize documents in databases by subject, and to choose the best locations for fire stations in a city. Although the underlying objective of a typical problem instance in one setting may be significantly different from that in another, this causes approximation algorithms to have inconsistent performance across different application domains. Studying and characterizing which algorithms are best in which contexts is a task often referred to in the literature as algorithm configuration. This line of work allows researchers to compare algorithms according to a metric of expected performance over a problem domain, rather than worst-case analysis; if the worst-case instances occur infrequently in an application domain, algorithm comparison based on them could be uninformative or misleading.

We approach algorithm configuration via a framework wherein the application domain is modeled as a distribution over problem instances. We fix an infinite class of approximation algorithms for the problem at hand and design computationally efficient and sample efficient algorithms that learn the approximation algorithm with the best performance over the distribution; therefore, the learned algorithm has high performance on the specific application domain. Gupta and Roughgarden introduced this learning framework to the theory community as the primary model for algorithm configuration; portfolio selection has been studied in the artificial intelligence community for decades and has led to breakthroughs in diverse fields including combinatorial auctions, scientific computing, vehicle routing, and SAT. In this framework, we study two important infinite algorithm classes. First, we analyze approximation algorithms based on semidefinite programming (SDP) relaxations and randomized rounding procedures, which are used to approximate integer quadratic programs (IQPs); these algorithms are used to find nearly optimal solutions to a variety of combinatorial partitioning problems, including the seminal max-cut and max-2SAT problems. Second, we study agglomerative clustering algorithms followed by a dynamic programming step to extract a good clustering; these techniques are widely used in machine learning and across many scientific disciplines for data analysis. We begin with a concrete problem description.
1.1 Problem description and learning framework. Fix a computational problem, such as clustering, and assume there exists an unknown distribution D over a set of problem instances; denote by n an upper bound on the size of the problem instances in the support of D. For example, the support of D might be a set of social networks over individuals that a researcher wants to cluster, with the goal of choosing an algorithm that will perform well over a series of clustering analyses. Next, fix a class of algorithms A and a cost function cost; the learner's goal is to find an algorithm in A that approximately optimizes the expected cost with respect to the distribution. This is formalized as follows.

Definition. A learning algorithm (ε, δ)-learns the algorithm class A with respect to the cost function cost if, for every distribution D, with probability at least 1 − δ over the choice of a sample, it outputs an algorithm Â ∈ A whose expected cost is within ε of min over A of the expected cost. We require that the number of samples be polynomial in n, 1/ε, and 1/δ, where n is the upper bound on the size of the problem instances in the support of D, and we say the learner is computationally efficient if its running time is also polynomial.

We derive our guarantees by analyzing the pseudo-dimension of the algorithm classes we study (see Section 2), and we use the structure of each problem to provide efficient learning algorithms for the classes we study.

1.2 Our methods for integer quadratic programming. Many problems, such as max-cut, max-2SAT, and correlation clustering, can be represented as an integer quadratic program (IQP) of the following form: the input is a matrix A = (a_{ij}) with nonnegative diagonal entries, and the output is a binary assignment of the variables x₁, …, xₙ, each set to either −1 or 1, that maximizes Σ_{i,j} a_{ij} x_i x_j. In this formulation, if the diagonal entries are allowed to be negative, the ratio between the semidefinite relaxation and the integral optimum can become arbitrarily large, so we restrict the domain to matrices with nonnegative diagonal entries. IQPs appear frequently in machine learning applications, such as MAP inference, and in image segmentation and correspondence problems in computer vision; other important IQP problems with applications to machine learning include community detection, variational methods for graphical models, and metric learning, with the seminal max-cut problem being the textbook example for semidefinite programming. The IQP also arises in many scientific domains, such as circuit design and computational biology. The best approximation algorithms for IQPs relax the problem to an SDP: given the input matrix, the output is a set of unit vectors u₁, …, uₙ maximizing Σ_{i,j} a_{ij} ⟨u_i, u_j⟩.
infinite number routines construct hierarchical tree clusters next algorithm runs dynamic programming procedure find pruning tree minimizes one infinite number clustering objectives example clustering objective objective dynamic programming step return optimal pruning cluster tree procedure consider several parameterized agglomerative procedures induce spectrum algorithms interpolating popular procedures prevalent practice known perform nearly optimally many settings dynamic programming step study infinite class objectives include standard objectives common applications information retrieval show learn best agglomerative algorithm pruning objective function pair thus extending work multiparameter algorithms provide tight bounds ranging log simpler algorithm classes complex algorithm classes learning algorithms sample efficient key challenges one key challenges analyzing algorithm classes study must develop deep insights changes algorithm parameters affect solution algorithm returns arbitrary input example clustering analysis cost function could objective function even distance clustering range algorithm parameters alter merge step tuning intricate measurement overall similarity two point sets alter pruning step adjusting way combinatorially complex cluster tree pruned cost returned clustering may vary unpredictably similarly integer quadratic programming variable flips positive negative large number summands iqp objective also flip signs nevertheless show scenarios take advantage structure problems develop learning algorithms bound way algorithm analyses require care standard complexity derivations commonly found machine learning contexts typically function classes used machine learning linear separators smooth curves euclidean spaces simple mapping parameters specific hypothesis prediction given example close connection distance parameter space two parameter vectors distance function space associated hypotheses roughly speaking necessary understand connection order determine many significantly different hypotheses full range parameters due inherent complexity classes consider connecting parameter space space approximation algorithms associated costs requires much delicate analysis indeed key technical part work involves understanding connection perspective fact structure discover analyses allows develop many computationally efficient algorithm configuration due related concept shattering constrained log often implies small search space log uncover nearly optimal configuration bolster theory algorithm configuration studying algorithms problems ubiquitous machine learning optimization integer quadratic programming clustering paper develop techniques analyzing randomized algorithms whereas algorithms analyzed previous work deterministic also provide first lower bounds line work require involved analysis algorithm family performance carefully constructed instances lower bounds somewhat counterintuitive since several classes study order log even corresponding classes algorithms defined single parameter preliminaries definitions section provide definition context algorithm classes consider class algorithms class problem instances let cost function cost denote abstract cost running algorithm problem instance similarly define function class cost recall finite subset problem instances shattered function class exist witnesses subsets exists function cost words algorithm cost define algorithm class dim cardinality largest subset shattered bounding dim clearly derive sample complexity guarantees 
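The shattering definition just given, and the standard uniform-convergence fact that follows from it, can be written out as follows; the constant in the sample bound is left unspecified, as in the classical statements:

\[
S = \{x_1,\dots,x_m\} \text{ is shattered by } \mathcal{H} \iff \exists\, z_1,\dots,z_m \in \mathbb{R} \ \text{s.t.}\ \forall\, T \subseteq S,\ \exists\, h_T \in \mathcal{H}: \ h_T(x_i) \le z_i \Leftrightarrow x_i \in T,
\]
\[
\mathrm{Pdim}(\mathcal{H}) = \max\{\, m : \text{some } S \text{ with } |S| = m \text{ is shattered} \,\},
\qquad
m = O\!\left( \left(\tfrac{H}{\epsilon}\right)^{2} \left( \mathrm{Pdim}(\mathcal{H}) \log \tfrac{H}{\epsilon} + \log \tfrac{1}{\delta} \right) \right),
\]

where H bounds the range of the cost functions and m samples suffice for every h in the class to have empirical cost within epsilon of its expected cost, with probability at least 1 - delta.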
context algorithm classes every distribution every every dim log suitable constant independent parameters probability least samples cost cost every algorithm therefore learning algorithm receives input sufficiently large set samples returns algorithm performs best sample guaranteed algorithm close optimal respect underlying distribution methods integer quadratic programming section study several iqp approximation algorithms classes consist sdp rounding algorithms generalization seminal algorithm prove possible learn optimal algorithm fixed class specific application domain many classes study learning procedure computationally efficient sample efficient focus integer quadratic programs form aij input matrix nonnegative diagonal entries output assignment binary variables maximizing sum specifically variable set either problem also known maxqp algorithms best approximation guarantees use sdp relaxation sdp relaxation form maximize aij hui subject given set vectors must decide represent assignment binary variables algorithm vectors projected onto random vector drawn gaussian next directed distance resulting projection greater corresponding binary variable set otherwise set cases algorithm improved upon probabilistically assigning binary variable final rounding step rounding function used specify variable set probability probability see algorithm pseudocode known random algorithm sdp rounding algorithm rounding function input matrix solve sdp optimal embedding choose random vector according gaussian distribution define fractional assignment output projection randomized rounding algorithm named seminal work randomized assignment produced algorithm called fractional assignment based output derive proper assignment variables set probability probability section analyze class round functions problem proved maximium cut graph large approximation ratio ratio possible using rounding function example proved optimal cut contains fraction edges ratio least optimal choice depends graph give efficient algorithm learn nearly optimal value expectation distribution problem instances appendix consider rounding functions including rounding functions outward rotation algorithms rounding functions include classes outward rotation functions rounding function parameterized follows figure graph function goal devise algorithm lslin best rounding function respect distribution maxqp problem instances specifically let expected value solution returned using rounding function evaluated maxqp problem instance defined matrix instantiate cost function cost cost take negative since goal find parameter maximize value expectation minimizing cost amounts maximizing require lslin returns value probability least maximizes one might expect first step would bound class pdim set matrices nonnegative diagonal entries upper bound range restricted support distribution problem instances pursue alternative route provides simpler sample complexity algorithmic analysis instead bound pseudodimension class hslin slins slins value fractional assignment produced projecting sdp embedding onto rounding directed distances function explicpof projections multiplied using rounding itly slins aij hui hui notice slins slins denotes standard gaussian distribution prove tight bounds hslin derive generalization guarantees algorithm class ultimately care follows omitted proofs found appendix theorem suppose lslin algorithm takes input samples returns parameter maximizes slins supposepthat sufficiently large ensure probability least slins slins lslin class 
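As an illustration of the rounding step analyzed here, the following sketch implements one draw of the randomized rounding with the s-linear function. It assumes the SDP embedding U (rows u_1, ..., u_n) has already been computed by an SDP solver; all names are illustrative.

import numpy as np

def phi_s(y, s):
    """s-linear rounding function: -1 for y <= -s, +1 for y >= s, linear in between."""
    return np.clip(y / s, -1.0, 1.0)

def slin_round(A, U, s, rng):
    """One run of randomized rounding with the s-linear function.

    A   : (n, n) instance matrix with nonnegative diagonal entries
    U   : (n, d) array whose rows are the precomputed SDP embedding vectors
    s   : rounding parameter (s > 0)
    rng : numpy random Generator
    Returns a +/-1 assignment x and its objective value sum_ij a_ij x_i x_j.
    """
    Z = rng.standard_normal(U.shape[1])              # random Gaussian direction
    frac = phi_s(U @ Z, s)                           # fractional assignment in [-1, 1]
    x = np.where(rng.random(len(frac)) < (1 + frac) / 2, 1, -1)  # P[x_i = 1] = (1 + h_i)/2
    return x, float(x @ A @ x)

For i != j the final coin flips are independent, so E[x_i x_j] = h_i h_j; up to the nonnegative diagonal terms, the expected objective equals the fractional value slin_s that the pseudo-dimension analysis tracks.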
rounding functions respect cost function show pdim hslin log present algorithm computationally efficient sample efficient algorithm often fix tuple consider slins function denote slina begin helpful lemma lemma function slina made piecewise components form moreover border two components falls must optimal sdp embedding optimal embedding may write proof first let slina aij hui hui specific form depends solely whether course disregard possibility respectively long sign depends sign grows point therefore order set real values long falls two consecutive elements ordering form slina fixed particular summand either constant constant multiplied constant multiplied perhaps accompanied additive constant term means may partition positive real line intervals form slin fixed quadratic function claimed lemma allows prove following bound pdim hslin lemma pdim hslin log lemma follows lemmas prove pdim hslin log pdim hslin log lemma pdim hslin log proof prove upper bound showing set size shatterable log means largest shatterable set must size log pseudodimension hslin log arrive bound fixing tuple analyzing slina particular make use lemma know slin composed piecewise quadratic components therefore witness corresponding element partition positive real line intervals intervals sample witness intervals sample witness intervals sample witness merging intervals samples witnesses intervals samples respective witnesses figure partitioning intervals given set tuples witnesses within interval slin always greater lesser slina always either less witness greater varies one fixed interval constant term comes fact single continuous quadratic component slin function may equal twice three subintervals function less greater consists tuples corresponds partition positive real line merge partitions shown figure simple algebra shows left intervals slina always either less witness greater varies one fixed interval words one interval binary labeling defined whether sample less greater witness fixed means shatterable values induce binary labelings must come distinct intervals therefore log lemma pdim hslin log proof sketch order prove pseudo dimension hslin least log present set log graphs projection vectors shattered hslin words exist witnesses values snc exists slinst slinst build use graph vary set graph composed disjoint copies via careful choice vectors witnesses pick critical values call sling switches witness every element critical values meanwhile sling switches witness half often sling similarly sling switches witness half often sling therefore achieve every binary labeling using functions slins shattered lower bound particularly strong holds family positive semidefinite matrices rather general family matrices prove learning algorithm algorithm correct computationally efficient sample efficient prove algorithm class rounding functions first prove algorithm ideed outputs empirically optimal value algorithm algorithm finding empirical value maximizing rounding function input sample solve sdp embedding let set values exists pair indices let value maximizes slina let value maximizes slina output lemma algorithm produces value maximizes slins input sample algorithm running time polynomial proof first define slina claim algorithm maximizes lemma proved function slina made piecewise components form therefore made piecewise components form well moreover lemma border two components falls must optimal sdp embedding thresholds computed step algorithm therefore increase starting fixed quadratic function thresholds simple find 
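The interval search described by the algorithm above can be sketched as follows. It reuses the clipped fractional assignment from the earlier sketch and, for brevity, evaluates one candidate value per interval rather than maximizing the piecewise-quadratic component in closed form as the exact algorithm does; a nonempty sample is assumed.

import numpy as np

def empirically_best_s(samples):
    """Search for the empirically best s over a sample of (A, U, Z) triples.

    Between consecutive breakpoints s = |<Z, u_i>| each summand of slin keeps a
    fixed algebraic form in 1/s, so it suffices to consider one value per interval.
    """
    def slin_value(A, U, Z, s):
        h = np.clip((U @ Z) / s, -1.0, 1.0)          # fractional assignment phi_s
        return float(h @ A @ h)

    bps = sorted({abs(float(np.dot(Z, u))) for _, U, Z in samples for u in U} - {0.0})
    cands = bps + [(a + b) / 2 for a, b in zip(bps, bps[1:])] + [bps[-1] + 1.0]
    return max(cands, key=lambda s: np.mean([slin_value(A, U, Z, s) for A, U, Z in samples]))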
optimal value pair consecutive step thresholds step value maximizing slins global optimum ready prove main result drawn algotheorem given sample size log log rithm class rounding functions respect cost function computationally efficient proof let sample size lemma prove algorithm input returns value maximizes slins polynomial time pseudodimension bound lemma log log samples probability least slins slins lemma thus algorithm best function respect agglomerative algorithms dynamic programming begin overview agglomerative algorithms dynamic programming include many clustering algorithms define several parameterized classes algorithms previous section prove possible learn optimal algorithm fixed class specific application many classes analyze procedure computationally efficient sample efficient focus agglomerative algorithms dynamic programming clustering problems clustering instance consists set points distance metric specifying pairwise distances points overall goal clustering partition points groups distances within group minimized distances group maximized clustering typically performed using objective function distance ground truth clustering scenario discuss detail section formally objective function takes input set points call centers well partition call clustering define rich class clustering objectives objective functions next define agglomerative clustering algorithms dynamic programming prevalent practice enjoy strong theoretical guarantees variety settings examples algorithms include popular averagelinkage algorithms dynamic programming agglomerative clustering algorithm dynamic programming defined two functions merge function pruning function merge function defines distance two sets points algorithm builds cluster tree starting singleton leaf nodes iteratively merging two sets minimum distance single node remaining consisting set children node tree correspond two sets points merged form sequence merges common choices merge function include single linkage average linkage complete linkage pruning function takes input subtree returns score pruning subtree partition points contained root clusters cluster internal node pruning functions may similar objective functions though input subtree objectives standard pruning functions algorithm returns tree optimal according found polynomial time using dynamic programming algorithm details merge function pruning function work together form agglomerative clustering algorithm dynamic programming dynamic programming step find node need find best center recursively find best considering different combinations best left child best right child choosing best combination pictorially figure depicts array available choices designing agglomerative clustering algorithm dynamic programming path chart corresponds alternative choice merging function pruning function algorithm designer goal determine path optimal specific application domain section analyze several classes algorithms merge function comes infinite family functions pruning function arbitrary fixed function section expand analysis include algorithms defined infinite family pruning functions conjunction family merge functions results hold even fixed preprocessing step precedes agglomerative merge step long independent several papers provide theoretical guarantees clustering family objective functions values instance see gupta tangwongsan work provides approximation algorithm log bateni work studies distributed clustering algorithms algorithm agglomerative algorithm dynamic programming input 
clustering instance merge function pruning function agglomerative merge step build cluster tree according start singleton sets iteratively merge two sets minimize single set remains let denote cluster tree corresponding sequence merges dynamic programming find minimizing node find best subtree rooted denoted according following dynamic programming recursion ctl ctr ctl ctr otherwise denote left right children respectively output best root node troot figure schematic class agglomerative clustering algorithms dynamic programming therefore analysis carries algorithms merge functions define three infinite families merge functions provide sample complexity bounds families fixed arbitrary pruning function families consist merge functions depend minimum maximum pairwise distances second family denoted richer class depends pairwise distances classes parameterized single value min max min max define merge function defined define spectra merge functions ranging defines richer spectrum includes addition given pruning function denote algorithm builds cluster tree using prunes tree according reduce notation clear context often refer algorithm set algorithms example cost function always set minimize objective pruning function clear context recall given class merge functions cost function generic clustering objective goal learn value expectation unknown distribution clustering instances one might wonder optimal across instances would preclude need learning algorithm theorem prove case given exists distribution clustering instances best algorithm respect crucially means even algorithm designer sets typical practice optimal choice tunable parameter could real value optimal value depends underlying unknown distribution must learned matter value formally describe result set notation similar section let denote set clustering instances points slight abuse notation use denote abstract cost clustering produced instance theorem permissible value iof ahi exists distribution clustering instances permissible values omitted proofs section see appendix arbitrary objective function arbitrary pruning function analyze complexity classes drop subscript hai objective function clear context furthermore analysis often fix tuple use notation analyze changes function start theorem objective functions pdim log pdim pruning functions pdim log objective functions log pdim log theorem follows lemma lemma begin following structural lemma help prove lemma lemma made piecewise constant components proof sketch note clustering returned associated cost identical algorithms construct merge tree range across observe run algorithm values expect produce different merge trees answer suppose point run algorithm two pairs subsets could potentially merge exist eight points decision pair merge depends sign using consequence rolle theorem provide appendix show sign expression function flips four times across since merge decision defined eight points iterating follows identify unique points correspond value decision flips means divide intervals merge tree therefore output fixed appendix show corresponding statement lemma lemmas allow upper bound log manner similar lemma prove upper bound class sdp rounding algorithms thus obtain following lemma lemma pdim log pdim log next give lower bounds two classes lemma objective function pdim log pdim log proof sketch give general proof outline applies classes let construct set log clustering instances shattered possible labelings set need show choices labelings achievable crux proof lies showing given 
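A compact sketch of the two-phase algorithm of this section, using the interpolated linkage alpha * (minimum pairwise distance) + (1 - alpha) * (maximum pairwise distance) as one illustrative member of the merge-function families, and the k-median objective in the pruning step. The distance d is a dict-of-dicts and all names are illustrative.

import itertools

def build_tree(points, d, alpha):
    """Agglomerative merge step with an interpolated min/max linkage.
    A node is a triple (point set, left child, right child)."""
    def link(a, b):
        dists = [d[p][q] for p in a[0] for q in b[0]]
        return alpha * min(dists) + (1 - alpha) * max(dists)

    nodes = [(frozenset([p]), None, None) for p in points]
    while len(nodes) > 1:
        a, b = min(itertools.combinations(nodes, 2), key=lambda ab: link(*ab))
        nodes.remove(a)
        nodes.remove(b)
        nodes.append((a[0] | b[0], a, b))
    return nodes[0]

def best_pruning(node, k, d):
    """Dynamic programming step: cheapest k-median pruning of the subtree."""
    cluster, left, right = node
    if k == 1:
        cost = min(sum(d[c][p] for p in cluster) for c in cluster)
        return cost, [cluster]
    best_cost, best_part = float("inf"), None
    if left is not None:                      # internal node: split k among children
        for k_left in range(1, k):
            cl, pl = best_pruning(left, k_left, d)
            cr, pr = best_pruning(right, k - k_left, d)
            if cl + cr < best_cost:
                best_cost, best_part = cl + cr, pl + pr
    return best_cost, best_part

Typical usage: tree = build_tree(points, d, alpha) followed by cost, clustering = best_pruning(tree, k, d).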
sequence recall although popular choices algorithm designer use objective functions distance ground truth clustering discuss section algorithm algorithm finding empirical cost minimizing algorithm input sample let sample ordered set points solve solution exists following equation add solutions order elements set pick arbitrary interval run clustering instances compute let value minimizes output possible design instance points choose witness alternates times traverses sequence intervals high level description construction two main points rest points defined groups define distances points initially merges form set merges form set depending whether merges points sets respectively vice versa means values unique behavior merge step finally sfor sets merge sets merge let intervals returns unique partition carefully setting distances cause cost oscillate specified value along intervals upper bound implies computationally efficient sample cient learning algorithm see algorithm first know samples sufficient optimal algorithm next consequence lemmas range feasible values partitioned intervals output fixed entire set samples given interval moreover intervals easy compute therefore learning algorithm iterate set intervals interval choose arbitrary compute average cost evaluated samples algorithm outputs minimizes average cost theorem let clustering objective pruning function computable log log value nomial time given input sample size algorithm class respect cost function computationally efficient proof algorithm finds empirically best solving discontinuities evaluating function corresponding intervals guaranteed constant lemmas therefore pick arbitrary within interval evaluate empirical cost samples find empirically best done polynomial time polynomially many intervals runtime given instance polynomial time follows theorem samples sufficient algorithm optimal algorithm turn obtain following bounds theorem objective functions pdim objective functions pruning functions pdim theorem follows lemmas lemma objective functions pruning functions pdim proof recall proof lemma interested studying merge trees constructed changes instances increase proof lemma fix instance consider two pairs sets could potentially merged decision merge one pair determined sign expression first note expression terms consequence rolle prove appendix roots theorem therefore iterate possible pairs determine unique expressions values corresponding decision flips thus divide intervals output fixed fact suppose shatterable set size witnesses divide intervals fixed therefore corresponding labeling according whether fixed well means achieve labelings least shatterable set lemma objective functions pdim proof sketch crux proof show exists clustering instance points witness set oscillates along sequence intervals finish proof manner similar lemma constructing instances fewer oscillations construct first define two pairs points merge together regardless value call merged pairs next define sequence points distances set merges involving points sequence occur one particular merges one merges therefore potentially distinct merge trees created using induction precisely set distances show distinct values corresponding unique merge tree thus enabling achieve possible merge tree behaviors finally carefully add points instance control oscillation cost function intervals desired appendix prove results assuming natural restriction instance space particular show drastically reduced number unique distances problem instance large appendix analyze classes 
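The counting step used in these proofs can be made explicit. For the exponent-parameterized families, the comparison that decides which pair merges next is a short sum of exponentials in the parameter, and the cited fact (from Tossavainen, a consequence of Rolle's theorem) bounds its sign changes; the four-term form below is the min/max family case:

\[
\big(d_{\min}(A,B)^{\alpha} + d_{\max}(A,B)^{\alpha}\big) - \big(d_{\min}(X,Y)^{\alpha} + d_{\max}(X,Y)^{\alpha}\big)
\;=\; \sum_{i=1}^{4} c_i\, e^{\alpha \ln d_i},
\]

and a sum of t terms of the form c_i e^{lambda_i alpha} has at most t - 1 zeros. Each comparison is determined by at most 8 points and flips its sign O(1) times, so iterating over the O(n^8) comparisons partitions the range of alpha into polynomially many intervals on which the merge tree is fixed.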
algorithms interpolate summary results algorithm classes found table dynamic programming pruning functions previous section analyzed several classes merge functions assuming fixed pruning function dynamic programming step standard clustering algorithm step algorithm section analyze infinite class dynamic programming pruning functions derive comprehensive sample complexity guarantees learning best merge function pruning function conjunction allowing choice pruning function significantly generalize standard clustering algorithm framework recall algorithm selection model instantiated cost function generic clustering objective standard clustering algorithm framework defined general include objectives like best choice pruning function algorithm selector would return optimal pruning cluster tree instantiation cost however goal algorithm selector example provide solutions close ground truth clustering problem instance best choice pruning function obvious case assume learning algorithm training data consists clustering instances labeled expert according ground truth clustering example ground truth clustering might partition set images based subject partition set proteins function fresh input data longer access expert ground truth hope prune cluster tree based distance ground instead algorithm selector must empirically evaluate well pruning according alternative objective functions approximate ground truth clustering labeled training data way instantiate cost distance clustering ground truth clustering guarantee empirically best pruning function class computable objectives expectation new problem instances drawn distribution training data crucially able make guarantee even though possible compute cost algorithm output fresh instances ground truth clustering unknown along lines also handle case training data consists clustering instances clustered according objective function compute scenario learning algorithm returns pruning objective function efficiently computable best approximates objective training data therefore best approximate objective future data hence section analyze richer class algorithms defined class merge functions class pruning functions learner learn best combination merge pruning functions class define general class agglomerative clustering algorithms let denote generic class merge functions classes defined sections parameterized also define rich class clustering objectives dynamic programming step takes input partition points set centers function defined distance ground truth clustering directly measured clustering algorithm used new data however assume learning algorithm access training data consists clustering instances labeled ground truth clustering learning algorithm uses data optimize parameters defining clustering algorithm family high probability new input drawn distribution training data clustering algorithm return clustering close unknown ground truth clustering figure cluster tree corresponding table clusters centers clusters centers clusters centers table example dynamic programming table corresponding cluster tree figure note definition identical use different notation confuse dynamic programming function clustering objective function let denote merge function denote pruning function earlier abstract objective bounded pseudodimension denoted cost clustering produced building cluster tree using merge function pruning tree using fixed pruning function interested algorithms form uses merge function build cluster tree use pruning function prune analyze resulting class 
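One concrete parameterization consistent with the class of clustering objectives described here raises center-distances to a power. A small sketch, in which beta = 1 recovers k-median, beta = 2 recovers k-means, and large beta approaches k-center; the function name and signature are illustrative.

def pruning_objective(clusters, centers, d, beta):
    """A (sum of beta-th powers)^(1/beta) clustering objective.

    clusters : list of point collections
    centers  : one chosen center per cluster
    d        : dict-of-dicts of pairwise distances
    beta     : positive exponent
    """
    total = sum(d[p][c] ** beta
                for cluster, c in zip(clusters, centers)
                for p in cluster)
    return total ** (1.0 / beta)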
algorithms denote bound pseudodimension recall order show pseudodimension upper bounded dha proved given sample clustering instances nodes split real line dha intervals ranges single interval cluster trees returned merge function fixed extend analysis first prove similar fact lemma namely given single cluster tree split real line fixed number intervals ranges single interval pruning returned using function fixed show theorem combine analysis rich class dynamic programming algorithms previous analysis possible merge functions obtain comprehensive analysis agglomerative algorithms dynamic programming visualize dynamic programming step algorithm pruning function using table table corresponds cluster tree figure row table corresponds value column corresponds node corresponding cluster tree column corresponding node row corresponding value fill cell partition clusters corresponds best subtree rooted defined step algorithm figure partition positive real line based whether ranges lemma given cluster tree clustering instance points positive real line partitioned set intervals cluster tree pruning according identical proof prove claim examine dynamic programming table corresponding given cluster tree pruning function ranges positive real line theorem implies show split positive real line set intervals fixed interval ranges table corresponding cluster tree invariant matter choose table identical therefore resulting clustering identical output clustering cell table since corresponds best node containing points see table example prove total number intervals bounded prove lemma using induction row number table tive following positive real line partitioned set hypothesis intervals ranges first rows table corresponding notice means positive real line invariant intervals partitioned set ranges table corresponding invariant therefore resulting output clustering invariant well base case let positive real number consider first row table corresponding recall column table corresponds node clustering tree first row table column corresponding node fill cell single node point minimizes thing might change vary center minimizing objective let two points point better candidate center means words equation zeros intervals partition positive real line ranges whether fixed example see figure every pair points similarly partitions positive real line intervals merge partitions one partition pair points left intervals partitioning positive real line ranges point minimizes fixed since arbitrary thus partition real line node cluster tree partition defines center cluster ranges pover line merge partition every node left intervals ranges one interval centers nodes cluster tree fixed words point minimizes fixed course means first row table fixed well therefore inductive hypothesis holds base case inductive step consider row table inductive hypothesis positive real line partitioned set intervals ranges first rows table corresponding invariant fix interval let node cluster tree let left right children respectively notice pruning belongs cell ith row column corresponding depend cells ith row cells rows particular pruning belongs cell depends inequalities defining minimizes ctl ctr ctl ctr examine objective function show minimizing therefore optimal pruning changes small number times ranges arbitrary since strictly less best ctr ctr exactly entry row table column corresponding similarly best ctr ctr exactly entry row table column corresponding crucially entries change vary thanks inductive hypothesis therefore know corresponding 
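To make the counting in this induction concrete, the base-case comparison between two candidate centers can be written out; as with the merge step, it reduces to a sum of exponentials in the parameter, so the number of sign changes (hence of interval boundaries) is bounded by the same exponential-sums fact:

\[
\sum_{p \in C} d(p, q_1)^{\beta} \;-\; \sum_{p \in C} d(p, q_2)^{\beta}
\;=\; \sum_{i} c_i\, e^{\beta \ln d_i},
\]

a sum of at most 2|C| exponential terms and hence with at most 2|C| - 1 zeros. So for each pair of candidate centers the better center changes O(n) times as beta ranges over the positive reals, and merging over all pairs partitions the line into polynomially many intervals on which each table entry is fixed.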
combination best best pruning fixed denoted similarly corresponding combination best best pruning fixed denoted better pruning order analyze inequality let consider equivalent inequality expand expression let similarly inequality written equation zeros ranges therefore subintervals partitioning ranges one subinterval smaller fixed words ranges one subinterval either combination best left child best right child better combination best left child best right child vice versa pairs similarly partition subintervals defining better two prunings merge partitions total subintervals ranges single subinterval ctl ctr ctl ctr fixed since equations determine entry ith row table column corresponding node entry also fixed ranges single subinterval partition corresponds single cell row table considering row table whole must fill entries since columns table column corresponding partition ranges single subinterval partition entry row column fixed merge partitions left partition consisting intervals ranges single interval entry every column row fixed intervals subsets assumption first rows table also fixed therefore first rows fixed recap fixed interval ranges first rows table fixed inductive hypothesis intervals showed partitioned intervals one subinterval first rows table fixed therefore total intervals ranges single interval first rows table fixed aggregating analysis rows table intervals entire table fixed long ranges single interval ready prove main theorem section theorem given clustering instances suppose dha intervals partition domain ranges single interval cluster trees returned merge function fixed sets samples dim dha log proof let set clustering instances fix single interval shown along horizontal axis figure set cluster trees returned merge function fixed across samples know lemma split real line fixed number intervals ranges single interval shown along vertical axis figure dynamic programming table fixed samples therefore resulting set clusterings fixed particular fixed interval samples intervals merge left intervals ranges single interval table sample fixed therefore resulting clustering sample fixed since intervals inducing intervals total cells one fixed cell resulting clustering across samples dha fixed shatters must means log dha log intervals fixed interval fixed set pruned cluster trees samples cell fixed set cluster trees samples interval dha intervals figure illustration partition parameter space described proof theorem theorem given sample size dha log log clustering objective possible class algorithms respect cost function moreover procedure efficient following conditions hold constant ensures partition values polynomial polynomial ensures partition values polynomial possible efficiently compute partition intervals single interval cluster trees returned performed fixed proof technique finding empirically best algorithm follows naturally lemma partition range feasible values described section resulting interval find fixed set cluster trees samples partition values discussed proof lemma interval use prune trees determine fixed empirical cost corresponding interval illustrated figure iterating partitions parameter space find parameters best empirical cost theorem use lemma show pdim dha log thus arrive sample complexity bound constant discussion open questions work show learn algorithms several infinite rich classes sdp rounding algorithms agglomerative clustering algorithms dynamic programming provide computationally efficient sample efficient learning algorithms many problems push 
boundaries learning theory developing techniques compute intricate classes iqp approximation algorithms clustering algorithms derive tight bounds classes study lead strong sample complexity guarantees hope techniques lead theoretical guarantees areas empirical methods algorithm configuration developed many open avenues future research area work focused algorithm families containing computationally efficient algorithms however oftentimes empirical research algorithm families question contain procedures slow run completion many training instances situation would able determine exact empirical cost algorithm training set could still make strong provable guarantees algorithm configuration scenario work also leaves open potential bounds distributions clustering instances satisfying form stability approximation stability perturbation resilience acknowledgments work supported part grants nsf nsf sloan fellowship microsoft research fellowship nsf graduate research fellowship microsoft research women fellowship national defense science engineering graduate ndseg fellowship thank sanjoy dasgupta anupam gupta ryan donnell useful discussions references noga alon konstantin makarychev yury makarychev assaf naor quadratic forms graphs inventiones mathematicae martin anthony peter bartlett neural network learning theoretical foundations cambridge university press pranjal awasthi balcan konstantin voevodski local algorithms interactive clustering proceedings international conference machine learning icml pages pranjal awasthi avrim blum sheffet clustering perturbation stability information processing letters balcan nika haghtalab colin white clustering perturbation resilience proceedings annual international colloquium automata languages programming icalp balcan yingyu liang clustering perturbation resilience siam journal computing afonso bandeira nicolas boumal vladislav voroninski approach semidefinite programs arising synchronization community detection proceedings conference learning theory colt pages mohammadhossein bateni aditya bhaskara silvio lattanzi vahab mirrokni distributed balanced clustering via mapping coresets proceedings annual conference neural information processing systems nips pages ahron arkadi nemirovski lectures modern convex optimization analysis algorithms engineering applications volume siam william brendel sinisa todorovic segmentation independent set proceedings annual conference neural information processing systems nips pages fazli incremental clustering dynamic information processing acm transactions information systems tois yves caseau laburthe glenn silverstein factory vehicle routing problems international conference principles practice constraint programming pages springer moses charikar chandra chekuri feder rajeev motwani incremental clustering dynamic information retrieval proceedings annual symposium theory computing stoc pages moses charikar anthony wirth maximizing quadratic programs extending grothendieck inequality proceedings annual symposium foundations computer science focs pages timothee cour praveen srinivasan jianbo shi balanced graph matching proceedings annual conference neural information processing systems nips pages jim demmel jack dongarra victor eijkhout erika fuentes antoine petitet rich vuduc clint whaley katherine yelick linear algebra algorithms software proceedings ieee dudley sizes compact subsets hilbert space continuity gaussian processes journal functional analysis uriel feige michael langberg rounding technique semidefinite programs 
Journal of Algorithms.
Darya Filippova, Aashish Gadani, and Carl Kingsford. Coral: an integrated suite of visualizations for comparing clusterings. BMC Bioinformatics.
Roy Frostig, Sida Wang, Percy Liang, and Christopher Manning. Simple MAP inference via low-rank relaxations. In Proceedings of the Annual Conference on Neural Information Processing Systems (NIPS).
Michel Goemans and David Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM (JACM).
Anna Grosswendt and Heiko Röglin. Improved analysis of complete linkage clustering. In European Symposium on Algorithms (ESA). Springer.
Anupam Gupta and Kanat Tangwongsan. Simpler analyses of local search algorithms for facility location. arXiv preprint.
Rishi Gupta and Tim Roughgarden. A PAC approach to application-specific algorithm selection. In Proceedings of the ACM Conference on Innovations in Theoretical Computer Science (ITCS).
Qixing Huang, Yuxin Chen, and Leonidas Guibas. Scalable semidefinite relaxation for maximum a posteriori estimation. In Proceedings of the International Conference on Machine Learning (ICML).
Fredrik Johansson, Ankani Chattoraj, Chiranjib Bhattacharyya, and Devdatt Dubhashi. Weighted theta functions and embeddings with applications to max-cut, clustering and summarization. In Proceedings of the Annual Conference on Neural Information Processing Systems (NIPS).
Subhash Khot, Guy Kindler, Elchanan Mossel, and Ryan O'Donnell. Optimal inapproximability results for MAX-CUT and other 2-variable CSPs? SIAM Journal on Computing.
Kevin Leyton-Brown, Eugene Nudelman, and Yoav Shoham. Empirical hardness models: methodology and a case study on combinatorial auctions. Journal of the ACM (JACM).
Marina Meilă. Comparing clusterings: an information based distance. Journal of Multivariate Analysis.
Ryan O'Donnell and Yi Wu. An optimal SDP algorithm for Max-Cut, and equally optimal Long Code tests. In Proceedings of the Annual Symposium on Theory of Computing (STOC).
David Pollard. Convergence of Stochastic Processes.
David Pollard. Empirical Processes: Theory and Applications. Institute of Mathematical Statistics.
John Rice. The algorithm selection problem. Advances in Computers.
Andrej Risteski and Yuanzhi Li. Approximate maximum entropy principles via Goemans-Williamson with applications to provable variational methods. In Proceedings of the Annual Conference on Neural Information Processing Systems (NIPS).
Mehreen Saeed, Onaiza Maqbool, Haroon Atique Babri, Syed Zahoor Hassan, and Mansoor Sarwar. Software clustering techniques and the use of combined algorithm. In Proceedings of the European Conference on Software Maintenance and Reengineering. IEEE.
Sagi Snir and Satish Rao. Using Max Cut to enhance rooted trees consistency. IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB).
Timo Tossavainen. On the zeros of finite sums of exponential functions. Australian Mathematical Society Gazette.
Vijay Vazirani. Approximation Algorithms. Springer Science & Business Media.
Jun Wang, Tony Jebara, and Shih-Fu Chang. Semi-supervised learning using greedy max-cut. Journal of Machine Learning Research.
James White, Saket Navlakha, Niranjan Nagarajan, Mohammad Ghodsi, Carl Kingsford, and Mihai Pop. Alignment and clustering of phylogenetic markers: implications for microbial diversity studies. BMC Bioinformatics.
David Williamson and David Shmoys. The Design of Approximation Algorithms. Cambridge University Press.
Lin Xu, Frank Hutter, Holger Hoos, and Kevin Leyton-Brown. SATzilla: portfolio-based algorithm selection for SAT. Journal of Artificial Intelligence Research.
Chihiro Yoshimura, Masanao Yamaoka, Masato Hayashi, Takuya Okuyama, Hidetaka Aoki, Ken-ichi Kawarabayashi, and Hiroyuki Mizuno. Uncertain behaviours of integrated circuits improve computational performance. Scientific Reports.
Mingjun Zhong, Nigel Goddard, and Charles Sutton. Signal aggregate constraints in additive factorial HMMs, with application to energy disaggregation. In Proceedings of the Annual Conference on Neural Information Processing Systems (NIPS).
Uri Zwick. Outward rotations: a tool for rounding solutions of semidefinite programming relaxations, with applications to MAX CUT and other problems. In
proceedings annual symposium theory computing stoc pages proofs section theorem suppose lslin algorithm takes input samples returns parameter maximizes slins supposepthat sufficiently large ensure probability least slins slins lslin class rounding functions respect cost function proof theorem follows directly lemma lemma suppose sufficiently large ensure probability least draw samples functions slins slins maximizes probability least maximizes slins slins proof notice since product distribution slins slins slins know probability least functions slins slins assumption know probability least means lemma pdim hslin log proof order prove pseudo dimension hslin least log must present set log graphs projection vectors shattered hslin words exist witnesses values snc exists slinst slinst build use graph vary set graph composed disjoint copies simple calculation confirms optimal sdp embedding therefore optimal embedding set vectors sdp elements sdp define set vectors first set vector words concatenation vector next defined even powers otherwise entries similar vein pin pattern set entries form otherwise entries set following positive increasing constants appear throughout remaining analysis figure depiction sling increases black dot means sling white dot means sling also set claim witnesses sufficient prove set shatterable spend remainder proof showing true domain sling split intervals simple fixed form intervals begin form straightforward matter calculations check sling note power pattern chosen intervals well defined since call following increasing sequence numbers points interest use prove set shattered make two claims points interest sling witness whenever witness whenever let consider sling points interest per interval first half points interest sling greater witness second half sling less witness claims illustrated dots figure together claims imply shattered vector exists point interest slins induces binary labeling first claim true sling increasing function minimized sling sling always least witness sling increasing function limit therefore sling always less witness may conclude first claim always true second claim notice points interest per interval claimed first two points interest fall interval sling decreasing therefore minimized sling simple calculations show sling increasing function minimized sling desired remaining points interest fall interval sling segment function negative derivative form decreasing points interest weh already considered make half points interest interval therefore need show equals sling less witness saw sling decreasing segment enough show sling less witness end sling increasing function limit therefore equals sling less witness finally since sling decreasing interval must check point interest sling greater witness point interest sling less witness end sling function increasing minimized sling therefore sling next sling increasing function limit tends toward infinity therefore sling second claim holds algorithm classes maxqp functions class rounding functions finite yet rich class functions paradigm introduced donnell tool characterizing sdp gap curve problem define shortly order describe donnell guarantees using rounding functions however first define sdp value graph wij sdp max denotes suppose graph sdp value sdp sdp gap curve gapsdp function measures smallest optimal value among graphs sdp words given sdp guaranteed optimal value least gapsdp formally definition call pair sdp gap exists graph sdp opt define sdp gap curve gapsdp inf sdp gap donnell prove graph 
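In standard notation, the normalized SDP value of a weighted graph and the SDP gap curve just defined are as follows (assuming, as is conventional, total edge weight normalized to 1):

\[
\mathrm{SDP}(G) \;=\; \max_{\|u_i\| = 1} \; \sum_{(i,j) \in E} w_{ij}\, \frac{1 - \langle u_i, u_j \rangle}{2},
\qquad
\mathrm{Gap}_{\mathrm{SDP}}(c) \;=\; \inf\{\, s : (c, s) \text{ is an SDP gap} \,\},
\]

where (c, s) is an SDP gap if some graph G has SDP(G) >= c and Opt(G) <= s; equivalently, Gap_SDP(c) is the smallest optimum among graphs whose SDP value is at least c.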
sdp one runs iteratively rounding functions high probability least one result cut value gapsdp formally state definition rounding function well donnell algorithm guarantees definition given let denote partition intervals smallest integer multiple exceeding say function following hold identically identically values finite intervals set note functions theorem corollary algorithm given graph sdp runs time poly high probability outputs proper cut value least gapsdp namely algorithm alluded theorem takes input graph runs using rounding functions returns cut maximum value define value resulting cut finite function class log immediately implies following theorem theorem given input sample size log exists algorithm class rounding functions respect cost function outward rotations next study class outward rotation based algorithms proposed zwick maxcut problem outward rotations proven work better random hyperplane technique goemans williamson graphs light constitute large proportion edges stated earlier though feige langberg later showed exists class rounding functions becomes equivalent outward rotations analyze class originally presented zwick class outward rotation algorithms characterized angle varying results range algorithms random hyperplane technique goemans williamson naive approach outputting random binary assignment unlike output binary assignment outward rotation algorithm essence extends optimal sdp embedding way done understood follows original embedding first carried first space remaining set zero suppose orthonormal vectors along last embedding rotated original space towards angle performing outward rotations new embedding projected onto random hyperplane binary assignment defined deterministically based sign projections like algorithm intuitively parameter determines far sdp embedding used determine final projection arbitrary value drawn normal distribution contributed formally define class algorithm sdp rounding algorithm using rotation input matrix solve sdp optimal embedding define new embedding first correspond cos following set except set sin choose random vector according gaussian distribution decision variable assign sgn output set notations similar section let value binary assignment produced projecting sdp embedding onto rotating outwardly aij sgn sgn use denote expected value sampled normal distribution easily seen fact similar lemma appendix apply therefore use samples form analysis let howr first prove section howr log next section present efficient learning algorithm class outward rotation based algorithms show upper bound class outward rotation based algorithms following discussion use notation owra order examine value changes function fixed theorem pdim howr log proof suppose shatterable means exist thresholds exists parameter owra claim sample owra piecewise constant function values discontinuous true means exists values within given interval owra identical label given witness therefore values define intervals labels given witnesses set samples identical within interval distinct labelings achievable choice witnesses however since shatterable need thus log need prove claim owra given observe increases owra change sgn changes note hui cos sin projection first coordindates clearly monotone function attains zero hui tan implies sgn changes within therefore owra piecewise constant function discontinuities learning algorithm present algorithm efficiently learns best value outward rotation respect samples drawn lemma algorithm produces value maximizes owra given sample 
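The gamma-rotation step defined above is easy to sketch: the SDP embedding is scaled by cos(gamma), each variable receives its own orthogonal sin(gamma) coordinate, and the assignment is the sign of a random projection. Setting gamma = 0 recovers random-hyperplane rounding and gamma = pi/2 a uniformly random assignment; U is assumed precomputed and names are illustrative.

import numpy as np

def outward_rotation_round(A, U, gamma, rng):
    """One run of SDP rounding with outward rotation by angle gamma.

    U : (n, n) array of precomputed SDP embedding vectors, one per row.
    """
    n = U.shape[0]
    V = np.hstack([np.cos(gamma) * U, np.sin(gamma) * np.eye(n)])  # embedding in R^{2n}
    Z = rng.standard_normal(2 * n)
    x = np.where(V @ Z >= 0, 1, -1)
    return x, float(x @ A @ x)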
algorithm running time polynomial proof recall proof theorem defines intervals within behavior constant across samples therefore need examine performance single value within interval exhaustively evaluate possibilities single best one also observe since values step since computing binary assignment set instances particular value takes polynomial time step also take polynomial time algorithm algorithm finding empirical value maximizing input sample solve optimal sdp embeddings let set values exists pair indices tan let argmax output together theorem theorem lemma implies following theorem theorem given input sample size log log log drawn algorithm class outward rotation algorithm respect cost function general analysis algorithms sections investigated two specific classes algorithms present analysis applied wide range classes algorithms including functions outward rotations particular show classes rounding functions log first step towards goal define means class rounding functions section next order generalize simplify sample complexity analysis provide alternative randomized procedure call randomized projection randomized thresholding rprt produces binary assignment rather fractional assignment section prove rprt equivalent showing expectation assignment produced rprt value assignment produced arbitrary problem instance work rprt framework bound sample complexity required best rounding function fixed class either rprt prove showing rprt rounding function log section finally section present algorithm best rounding function fixed class generic class rounding functions say class functions functions class parameterized single constant exists baseline function every function clearly representation rich enough encompass classes common functions including functions linear function order class functions qualify sdp rounding functions additionally require function limit approaches limit approaches particular definition class functions consists rounding functions exists baseline function lim lim function write define rprt make following observation functions useful algorithm design sample complexity analysis particular observe cumulative density function associated probability distribution limits tends respectively denote probability density function associated randomized projection randomized thresholding rprt differs primarily produces binary assignment rather fractional assignment essence rprt simply samples binary assignment distribution binary assignments effectively outputs like rprt first projects embedding iqp instance onto random vector however rprt draws random threshold variable assigns variable either depending whether directed distance projection multiplied less greater threshold distribution thresholds picked designed mimic expectation algorithm sdp rounding algorithm rounding function input matrix solve sdp optimal embedding choose random vector according gaussian distribution draw probability density function corresponding decision variable output assignment sgn shui output show expected value binary assignment produced rprt equal given rounding function define rprts deterministic value binary assignment produced rprt using rounding function given values similarly define rprts expected value binary assignment produced rprt given value words rprts rprts finally define rprts expected value binary assignment produced rprt value binary assignment produced words rprts rprts theorem given maxqp problem input matrix rounding function expected value binary assignment produced rprt equals 
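For the s-linear baseline, the threshold distribution in RPRT has CDF G(q) = (1 + r(q))/2 with r(q) = clip(q, -1, 1), i.e. the thresholds are uniform on [-1, 1]. The sketch below draws one independent threshold per variable, so that conditioned on Z the coordinates are independent and E[x_i | Z] equals the fractional assignment clip(<Z, u_i>/s, -1, 1), matching the rounding algorithm in expectation; this instantiation is an illustrative reading of the procedure described here.

import numpy as np

def rprt_round(A, U, s, rng):
    """RPRT for the s-linear baseline: project onto a random Gaussian direction,
    then compare each scaled projection to an independent random threshold."""
    Z = rng.standard_normal(U.shape[1])
    Q = rng.uniform(-1.0, 1.0, size=U.shape[0])   # i.i.d. thresholds with CDF (1 + clip(q,-1,1))/2
    x = np.where(U @ Z / s - Q >= 0, 1, -1)       # x_i = sgn(<Z, u_i>/s - Q_i)
    return x, float(x @ A @ x)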
value fractional assignment produced rprts proof let denote binary fractional assignments produced rprt given respectively claim prove implies rprts prove make use fact given hence given projection hui independent random variables therefore expectation random draws written product individual expected values rprts aij aij aij rprts clear rprts sampled distribution algorithms need show given given variable expected binary assignment randomized thresholding step equal fractional assignment variable let examine expected binary assignment rprt particular variable probability assigned rqi shui since cumulative density function equal shui however due way defined value fact equal hui complete proof noting hui precisely fractional assignment choice rounding function taking advantage equivalence rprt given class sigmoidlike functions show learning algorithm best rounding functions respect rprt also best rounding functions respect recall section analysis incorporated sample set vein also incorporate sample words provide learning algorithm set samples order prove indeed works state theorem lemma parallel theorem lemma respectively theorem suppose class rounding functions let hrprt rprts let dhrprt hrprt suppose lrprt algorithm takes input samples dhrprt log log returns parameter maximizes rprts lrprt class rounding functions respect cost function computationally efficient proof theorem follows following lemma similar lemma lemma suppose sufficiently large ensure probability least draw samples functions rprts rprts maxthen least maximizes rprts probability imizes proof lemma since product distribution rprts rprts theorem know rprts implies rprts hence restate assumption theorem statement follows probability least functions rprts since true say probability definition combining previous three inequalities get general class rprt algorithms main result section tight bound general class rprt algorithms theorem let hrprt rprts pdim hrprt log follows lemma lemma provide matching upper lower bounds pdim hrprt lemma pdim hrprt log proof suppose shatterable exist thresholds exists parameters first claim given sample domain parti tioned intervals binary assignment produced rprt sample identical across parameter settings within given interval proved consider partition domain intervals based points know value within given interval intervals labeling induced witnesses therefore possible labelings produced choice witnesses set samples however since picked shatterable instance must log complete proof need examine behavior rprt single sample different configurations observe increase keeping else constant expect rprt produce different binary assignment assigned value changes vertex however happens shg since vertices one value algorithm changes behavior expect algorithm change behavior times increases proves claim prove lower bound hrprt lemma pdim hrprt log proof order prove least log constant must devise set samples size log shattered hrprt means able find witnesses values snc exists rprtst rprtst solution use designed proof lemma identical consists disjoint graphs vertices later pick value increases change cuts order change cuts alternatingly better worse appropriately choosing different values carefully place intervals oscillations occur across samples able shatter sample size log order define make use following increasing sequence defined recursively note also write use notation sake convenience let aik define terms aik follows even define aik aik aik aik odd define aik aik rationale behind choosing value become 
evident proceed proof first simple calculation confirm directed distance projections vertices lie certain intervals stated even aik aik odd aik aik choice pick value particular choose follows even odd make sense proceed proof analyze rprts varies function show rprts oscillates threshold times oscillations spaced manner pick values shatter first let examine values expect behavior rprts change know consist values equal based fact observe vertices values cut changes lie cki cki thus note intervals cut changes spaced far apart increasing order given analysis henceforth focus values outside intervals claim exists values bmax bmin value cut one values intervals particular value equal max odd equal bmin even simple calculation verified cut assigned varies follows even cki odd cki cut defined hand cut thus make crucial observation crosses interval cki even net value cut whole graph decreases odd increases precisely claim rprts function outside intervals takes one two values bmin bmax state values explicitly let define set variables even odd observe value every odd graph contributes value every even graph cut contributes value total cut denote value bmax value decreases call bmin extend observation follows interval cki bmin even rprts bmax odd need choose bmin bmax complete proof need show oscillations respect witnesses samples spaced manner pick different covering possible oscillations log becomes clear two observations first secondly values induce different labelings ith sample samples turns consider log contains claim intervals induce different labeling respect witnesses see true observe since labeling induced defined terms sequence bki odd however fact binary equivalent since binary equivalent given unique labeling induced unique thus log samples shattered learning algorithm present learning algorithm algorithm best rounding function respect class rounding functions algorithm algorithm finding empirical value maximizing rounding function input sample solve optimal sdp embeddings let set values exists pair indices sqi let argmax rprts output particular prove following guarantee regarding algorithm performance lemma algorithm produces value maximizes given sample algorithm running time polynomial proof correctness algorithm evident proof lemma algorithm identifies values behavior rprt changes proceeds exhaustively evaluate possible binary assignments polynomially many therefore algorithm successfully finds best value polynomial time corollary given input sample size log log log drawn algorithm class rounding functions respect cost function computationally efficient proofs section theorem permissible value iof ahi exists distribution clustering instances permissible values proof give general proof three classes point places proof details different general structure argument value construct single clustering instance desired property distribution merely single clustering instance probability consider permissible value denoted set clustering instance consists two gadgets two clusters class results different first gadget depending whether similarly results different second gadget depending whether ensuring first gadget results lowest cost second gadget results lowest cost ensure optimal parameter overall first gadget follows define five points sake convenience group remaining points four sets containing points set distances follows also define special points distances rest points respectively except two points belong set distances defined terms always set far distances defined therefore trivially 
satisfy triangle inequality set rest distances maximum distances allowed triangle inequality therefore triangle inequality holds entire metric let analyze merges caused various values regardless values since distances first five points smallest merges occur initially particular regardless merged next simple calculation merges merges denote set containing denote set containing one sets also contain minimum distance subsequent merges except last merge involve distances smaller never need consider merging next smallest distances points merge together similarly point algorithm created six sets claim merge merge maximum distance sets merges whereas minimum distance therefore three values claim holds true next claim cost gadget lowest clusters clearly since distances within much less distances across sets best points distance center proved pruning tree therefore must argue cost lowest idea act good center best center arbitrary point cost first case cost second case center change depending tie best center difference cost whether include cost otherwise cost putting together cost otherwise cost subtracting like terms conclude first case always smaller next construct second gadget arbitrarily far away first gadget second gadget similar first points sets rest distances gadget joins rest argument identical conclusion reach cost second gadget much lower therefore final cost minimized proof complete show structural lemma similar lemma provide full details proof lemma lemma made piecewise constant components proof first note clustering returned associated cost identical algorithms construct merge tree increase observe run algorithm values expect produce different merge trees answer suppose point run algorithm two pairs subsets could potentially merge exist eight points decision pair merge depends whether larger clearly one value expressions equal unless difference expressions zero assuming ties broken arbitrarily consistently implies one choice whether merge identical similarly identical since merge decision defined eight points iterating pairs follows identify unique points correspond value decision flips means divide intervals merge tree therefore output fixed provide details lemma argument structure relied linearity merge equation prove eight points exactly one value use consequence rolle theorem theorem bound values theorem let sum form least one number roots upper bounded lemma made piecewise constant components proof case clustering returned associated cost identical long algorithms construct merge trees objective understand behavior instances particular varies want count number times algorithm outputs different merge tree one instances instance consider two pairs sets potentially merged decision merge one pair determined sign expression determined set points chosen independent theorem sign expression function flips times across since expression defined exactly points iterating pairs list unique expressions correspond values corresponding decision flips thus divide intervals output fixed give full details lemmas lemma pdim log pdim log proof suppose set clustering instances shattered using witnesses must show log value based whether algorithm induces binary labeling lemma know every sample partitions intervals way merging partitions divide intervals therefore labeling induced witnesses fixed similar figure means achieve binary labelings least since shatterable log details identical using lemma lemma objective function pdim log pdim log first prove lemma objective cost denoted later note extended 
cluster purity based cost first prove following useful statement helps construct general examples desirable properties particular following lemma guarantees given sequence values size possible construct instance cost output function oscillates threshold moves along sequence intervals given powerful guarantee pick appropriate sequences generate figure clustering instance used lemma sample set log instances correspond cost functions oscillate manner helps pick values shatters samples lemma given given sequence exists real valued witness clustering instance proof first focus discuss end proof first order metric set distances triangle inequality trivially satisfied particular following distances pairs points within group set distances follows see figure first merges matter set point singleton set pair points minimum distance metric merge next either merge based following equation merges otherwise merge notice merge expression could small want merge occur ensure subsequent merges maximum distance less merge merge final step long distances ensure merges regardless value since closer furthermore merge opposite set since set merge expression merge opposite set merge expression set set distances follows also set distances distances set distances later fall construction every set merge current superset containing merge expression possible merge value larger similarly sets merge therefore final two sets linkage tree given struction finally set distances ensure cost function oscillates calculate cost different ranges regardless pthe partitions pay distances differ pay cost denote value rlow values change adds cost inequality always true denote rlow rhigh values change cost changes decreasing back rlow general cost rlow rlow cost rlow cost rhigh set rlow rhigh conclude cost function oscillates specified lemma statement pruning step clearly pick optimal clustering since centers points distance clusters points centers argument proved case set ensures cost function oscillates exactly times completes proof straightforward modify proof work major change set prove lemma proof lemma given prove claim hab constructing set samples log shattered hab able choose different values exists witnesses respect induces possible labelings choose sequence distinct arbitrarily range index terms sequence using notation iff satisfy given denote vector corresponding therefore smallest greater crucial step use lemma define examples witnesses labeling induced witnesses corre sponds vector means cost function must greater ith term less otherwise since implies sample values want cost flip accomplish using lemma choosing supposed switch labels manner pick thus creating sample size log shattered hab note lemma assumes pruning step fixes partition optimal centers chosen cluster partition points may switch clusters even closer center another cluster desirable instance applications much balanced partition desired pruning step outputs optimal centers clusters determined voronoi partition centers modify proof follows introduce points clustering instance merge cluster merge cluster set distances best centers distances also set cost voronoi tiling induced rlow cost rhigh sufficient argument furthermore lower bound holds even ifsthe cost function thessymmetric distance ground ground truth clustering proof let truth clustering interval increases cost function switches errors errors restate prove lemma lemma objective functions pdim prove start helper lemma lemma given setting exists clustering instance size set creates unique merge tree 
proof outline construction start two specific pairs points always merge first merges merges sets stay separated last merge operations throughout analysis point merging procedure denote current superset containing similarly denote superset next points merge come pairs construct distances always merge furthermore first merge merge opposite set let two merges called round finally set size merges together merges similarly set merges thus construction freedom whether merges combinations total crux proof show exists behaviors attack problem follows round following equation specifies whether merges lhs smaller merges otherwise carefully setting distances ensure exists value solution equation range merges assume easy force merge opposite set round two equations first equation specifies merges case second equation case must ensure exists specific solves equation solves equation solutions corresponding intervals general round equations corresponding possible states partially constructed tree state specific interval cause algorithm reach state must ensure equation exactly one solution interval achieve simultaneously every equation next round states see figure schematic clustering instance given let denote equation round determines merges case merged let denote single equation round let denote solution need show follow specific ordering shown figure ordering completely specified two conditions show set distances achieve properties enhance readability start example move general construction distances figure clustering instance used lemma figure schematic intervals edge denotes whether merge give construction round round distances ensures triangle inequality always satisfied set set pairwise distances also set distances first round say break ties lexicographic order merge first following unique solution merges merge equations otherwise long merge opposite cluster set significantly larger relevant distances set round two distances follows note distances since break ties lexicographically ensures merge round alternatively add tiny perturbations distances affect analysis ensure correct merge orders regardless tiebreaking rule values picked distances influence merge equations four distances value equality would round distances show equation set small offsets values either side equations following previous round straightforward check four merge equations send opposite cluster long far four intervals lead distinct behavior specify distances third round final round example set distances way previous rounds new distances follows first four points value would distances differ small offset distances differ even smaller offset causes latter distances less influence merge equations forces correct intervals general offset value decrease higher higher rounds equations follows solving obtain furthermore solving equations find merge opposite cluster therefore different ranges corresponding equations example suggests general argument induction follows intuition round new distances less less influence merge equations ensuring stay correct ranges double number behaviors argument utilize following fact true elementary calculus fact following true fixed nonincreasing fixed nonincreasing nondecreasing details general construction distances triangle inequality satisfied given offset values specify later following true first two merges always prefer merging instead merging another singleton first two merges occur tied first merge convenience specify tiebreaking order alternatively end make tiny perturbations distances 
tiebreaking occur next choose value must small enough ensure always merges opposite cluster consider equation positive always merge opposite cluster always merge opposite cluster similarly show setting note fact implies exists stays positive similarly exists cutoff value therefore long set offsets less merges follows merges merges merges merges opposite cluster always merge opposite cluster showi intervals give unique behavior recall defined brevity denote show correctly ordered proving following three statements induction first statement sufficient order second two help prove first exist solve satisfy proved base case earlier example assume exist satisfy three properties first prove inductive step second third statements inductive hypothesis know since finite integral values expression values exists expression values define long set inductive step second property fulfilled move third property following inductive hypothesis may similarly find move proving inductive step first property given let denote vectors sit either side ordering range set set define inductive hypothesis must show exists imply choose case since fact denote greatest index statement inductive hypothesis root statement inductive hypothesis know follows furthermore therefore denote fact exists case since fact property inductive hypothesis say expression equal fact exists combining recap cases showed exists min may perform similar analysis related function defined show exists perform analysis finally set minx given since must exist root fact function monotone short interval exactly one root similarly must exist root shown roots respectively construction condition satisfied need show condition satisfied given let largest number let inductive hypothesis follows proving condition completes induction ready prove lemma proof lemma given setting show exists clustering instance size witness set oscillates interval start using construction lemma gives clustering instance points values creates unique merge tree next part add points define witness cost function alternates along neighboring interval total oscillations finally finish proof manner similar lemma starting clustering instance lemma add two sets points interfere previous merges ensure cost functions alternates let distances two points similarly distances point point distances follows defined sets lemma specify distances soon start merge procedure points merge together points merge together merges lemma take place relevant distances smaller end four sets pairs dominated distances length merges occur dominate distances final merge occur however pruning step clearly pick since clustering tree almost distances construction best center beat similarly best center note centers currently give equivalent costs denote cost cost set distances set final distances follows best centers achieving cost best centers achieving cost distances also constructed variant pruning outputs optimal centers points allowed move closest center cost still oscillates first note points affected since similarly move cluster move cluster originally cost otherwise cost either scenario set ensured cost cost finished construction clustering instance whose cost function alternates times increases finish proof show exists set size shattered set orderings total use construction alternates times use construction eliminate rounds extra two points added preserve cost alternate times intervals oscillates every time oscillates increases general construction rounds oscillating times oscillation occurs every time 
oscillates ensures every unique labelings total labelings completes proof note lemma lower bound holds even cost function symmetric distance ground truth clustering merely let belong different ground truth clusters belong ground truth cluster since adjacent interval switch clusters shows symmetric distance ground truth clustering oscillates every interval give erm algorithm similar algorithm algorithm algorithm finding empirical cost minimizing algorithm input sample let sample solve solution exists following equation add solutions order elements set pick arbitrary interval run clustering instances compute let value minimizes output theorem let clustering objective let pruning function given input sample size log algorithm class respect cost function proof sample complexity analysis follows logic proof theorem prove algorithm indeed finds empirically best recall analysis cost function instance piecewise constant function discontinuities step algorithm solve values discontinuities occur add set therefore partitions range subintervals within intervals constant function therefore pick arbitrary within interval evaluate empirical cost samples find empirically best restricted classes clustering instances consider restricted classes clustering instances improve bounds compared lemma particular consider class consists clustering instances distances take one real values natural example would one distances integers less value case show tight bound theorem objective function let pdim min log proof proof upper bound follows similar line reasoning lemma particular instance let denote set values distances take linkage criterion merging expressed take one values corresponding number pairs points distance therefore iterating subsets like proof lemma list potential linkage criteria therefore set pairs subsets induce unique comparisons two linkage criteria argument proof lemma since comparison roots shatterable set means log theorem objective function pdim min proof lower bound use similar line reasoning lemma round construction lemma constant number distinct edge lengths added offsets define new distances per round set rest distances constant size therefore easily modify proof construct clustering instance rounds using distinct distances instance distinct behaviors depending however reasoning consistent may inherit lower bound lemma final lower bound min interpolation far linkage criteria based distances either two pairs points every single pair two sets considered merging provide interpolation two extremes particular define linkage criterion uses different distances sets comparison particular two sets define abstract rule pick pairs points example natural choice would pick quantiles set distances points along maximum minimum distances picking points define criterion function distances follows observe multiple parameters unlike classes algorithms discussed therefore analysis considerably different rest shown use notations similar previous sections theorem let pdim log proof proof parallels lemma consider two pairs sets potentially merged regardless parameters chosen know linkage criterion first chooses pairs points thep decision merge vice versa determined sign expression evaluates zero break ties arbitrarily consistently observe given set values expression either equal zero hyperplane passing origin normal parameter space hyperplane divides parameter space two correspond merging one pair sets next note given problem instance iterate pairs sets list possible choices hyperplane points thus problem 
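The ERM procedure over alpha described above exploits the fact that the empirical cost is piecewise constant: it suffices to collect all discontinuities, evaluate one representative alpha per interval, and return the best. In the sketch below, discontinuities_of and cost_of are hypothetical helpers standing in for the merge-equation solving and the clustering-plus-pruning steps of the algorithm.

def erm_best_alpha(samples, discontinuities_of, cost_of):
    points = sorted({a for s in samples for a in discontinuities_of(s)})
    # One representative alpha per interval of constancy, including the
    # unbounded intervals below the smallest and above the largest point.
    reps = ([points[0] - 1.0] +
            [(lo + hi) / 2 for lo, hi in zip(points, points[1:])] +
            [points[-1] + 1.0]) if points else [1.0]

    def empirical(alpha):
        return sum(cost_of(s, alpha) for s in samples) / len(samples)

    return min(reps, key=empirical)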
instances list different hyperplanes hyperplanes partition parameter space regions parameter settings given region correspond identical merge trees hence identical induced witnesses argument similar proof lemma conclude log theorem let pdim min log proof proof follows reasoning lemma decision whether merge two pairs sets sign difference linkage criterion points fixed given pairs set know theorem expression roots furthermore iterate pairs sets generate many expressions points particular list expression roots summary similar proof lemma argue set samples shattered need log theorem pdim proof use similar line reasoning lemma round construction lemma merges set size set size therefore easily modify proof construct clustering instance rounds merge equations lemma instance distinct behaviors depending [Table: for each class of algorithms, the corresponding linkage rule, the pseudodimension of the induced class, and the runtime of the learning algorithm; t_alg denotes the runtime of the algorithm equipped with an arbitrary linkage rule.]
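The interpolated linkage criterion can be made concrete as follows. The sketch assumes the landmark pairs are chosen as evenly spaced quantiles of the pairwise distances between the two sets, always including the minimum and the maximum, with a learned weight vector beta over these landmarks; this matches the natural choice suggested above but is only one instantiation of the abstract rule.

import numpy as np

def interpolated_linkage(dist, A, B, beta):
    # All pairwise distances between the two candidate sets, sorted.
    pairwise = np.sort([dist[a][b] for a in A for b in B])
    L = len(beta)
    # Landmarks: evenly spaced quantiles, always including min and max.
    idx = np.linspace(0, len(pairwise) - 1, L).astype(int)
    # Merge decisions compare this weighted combination across pairs,
    # so the parameter space is partitioned by hyperplanes in beta.
    return float(np.dot(beta, pairwise[idx]))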
8
nov endomorphisms regular rooted trees induced action polynomials ring integers elsayed ahmed dmytro savchuk abstract show every polynomial defines endomorphism dary rooted tree induced action ring integers sections endomorphism also turn induced polynomials degree case permutational polynomials acting bijections induced endomorphisms automorphisms tree case polynomials completely characterized rivest main application utilize result rivest derive condition coefficients permutational polynomial necessary sufficient induce level transitive automorphism binary tree equivalent ergodicity action respect normalized haar measure introduction fixed integer every polynomial naturally induces mappings positive integers equivalently mappings induced action ring integers two equivalent approaches study polynomials used different contexts last several decades one first motivations came constructions generators sequences goes back knuth applications crucial consider polynomials acting permutations polynomials generally called permutational polynomials however important emphasize distinction polynomials class permutation polynomials permute elements finite fields fpn see chapter survey many cases stronger condition transitivity action required another type applications come cryptography rivest completely characterized polynomials act permutations points use one namely symmetric block cipher rrsy one five finalists aes competition questions ergodicity action permutational polynomials studied context dynamical systems anashin refer reader nice survey paper background history polynomial dynamics paper offer another view polynomials acting namely use tools theory groups acting rooted trees automorphisms groups generated mealy automata theory exploded many counterexamples well known conjectures group theory found among groups example grigorchuk group first example group intermediate growth well one first examples infinite finitely generated torsion groups see also rich theory connections combinatorics analysis holomorphic dynamics dynamical systems computer science many areas refer reader survey article history references key idea many arguments theory understanding automorphisms rooted trees describing sections terms states restrictions also widely used subtrees hanging vertices original rooted tree original tree regular every vertex number children subtrees canonically isomorphic original tree sections treated automorphisms original tree well utilize approach analyze action polynomials note connection functions boundary induced automata functions also established anashin criterion finiteness corresponding automaton terms van der put series function developed criterion provided application analysis theory automata paper suggest converse application set vertices rooted tree identified set finite words alphabet case level corresponds identifying boundary consisting infinite paths without backtracking initiating root identified ring integers show proposition interpretation polynomial induces endomorphism tree permutational polynomial induces automorphism first result describes structure endomorphisms theorem given polynomial inducing endomorphism image vertex mod section polynomial given equation div denotes ith derivative polynomial div quotient note case linear polynomials partially considered bartholdi main application deals permutational polynomials acting transitively terms action tree condition equivalent level transitive equivalently induces level transitive automorphism corresponding dynamical system 
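Throughout, vertices of level n of the d-ary tree are identified with residues modulo d^n. A minimal sketch of this identification and of the induced action on vertices, assuming the digits of a vertex word are read least-significant first (the first of the two identifications discussed below):

def word_to_residue(word, d):
    # word = (x0, x1, ..., x_{n-1}), least significant digit first.
    return sum(digit * d**i for i, digit in enumerate(word))

def residue_to_word(v, d, n):
    return [(v // d**i) % d for i in range(n)]

def image_vertex(coeffs, word, d):
    # A polynomial f = sum a_i x^i maps the level-n vertex v to f(v) mod d^n.
    n = len(word)
    v = word_to_residue(word, d)
    fv = sum(a * v**i for i, a in enumerate(coeffs)) % d**n
    return residue_to_word(fv, d, n)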
minimal orbit element dense ergodic respect haar measure coinciding uniform bernoulli measure viewed cantor set proposition order state main result first review history problem following theorem prover larin gives conditions satisfy order transitive mod positive integer theorem polynomial transitive mod every positive integer satisfies following conditions iii mod mod mod mod rivest see alternative proof derived following conditions necessary sufficient polynomial induce permutation level hence automorphism tree theorem polynomial induces permutation positive integer satisfies following conditions mod mod iii mod using theorem study level transitivity permutational polynomial counting number nontrivial actions sections level proposition shown acts level transitively rooted binary tree number nontrivial actions sections level odd using fact determine conditions meet order level transitive conditions summarized following theorem main result paper theorem let permutational polynomial acting rooted binary tree action level transitive following conditions hold mod mod iii mod combining conditions theorems obtain conditions theorem using completely different approach hope new tool utilized attack problems analysis example suggested approach may work characterize ergodicity polynomials acting structure paper follows section set necessary notation regarding rooted trees automorphisms section describes endomorphisms automorphisms rooted trees arising polynomial actions ring integers finally section contains main result conditions permutational polynomial act level transitively binary tree equivalent ergodic action polynomial acknowledgement authors would like thank zoran said sidki fruitful motivating discussions preparation manuscript preliminaries start section notation terminology used throughout paper tree connected graph cycles rooted tree tree one vertex selected root connected graph metric called combinatorial metric defined distance pair vertices number edges shortest path geodesic connecting nth level rooted tree defined set vertices whose distance root since tree cycles vertex nth level one path root vertex path lies level called parent vertex called child hence every vertex except root exactly one parent may children rooted tree said exists positive integer vertex tree exactly children trees infinitely many levels case tree called rooted binary tree represents main interest paper always visualize trees grow top bottom root highest vertex children vertex located right label vertices rooted tree finite words finite alphabet equivalently set finite words given structure rooted tree declaring adjacent thus empty word corresponds root positive integer set corresponds nth level tree example rooted binary tree shown figure figure standard numbering vertices binary tree identify nth level ring identifying vertex example vertices second level rooted binary tree identified respectively shown figure moreover boundary tree naturally identified ring integers way identified nth level may natural way natural way identify vertex expansion vertices second level binary tree identified respectively shown figure however adopt first identification mappings induced polynomials preserve adjacency relation see later definition endomorphism map set vertices preserves adjacency relation map bijective called automorphism automorphism preserves degree vertex well distance vertex root since root vertex degree invariant even odd figure dyadic numbering vertices binary tree tree automorphisms also levels tree 
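Conditions of this kind can be checked empirically for small levels: a polynomial induces a permutation of level n of the binary tree iff x -> f(x) mod 2^n is a bijection, and it acts transitively on that level iff this map is a single 2^n-cycle. A brute-force sketch:

def is_permutation_mod(coeffs, m):
    # coeffs = [a0, a1, ...]; f permutes Z/m iff its image hits every residue.
    img = {sum(a * pow(x, i, m) for i, a in enumerate(coeffs)) % m
           for x in range(m)}
    return len(img) == m

def is_transitive_mod(coeffs, m):
    f = lambda y: sum(a * pow(y, i, m) for i, a in enumerate(coeffs)) % m
    x, seen = 0, set()
    for _ in range(m):
        seen.add(x)
        x = f(x)
    # Single m-cycle: the orbit of 0 has size m and returns to 0.
    return x == 0 and len(seen) == m

# The adding machine f(x) = 1 + x is transitive on every level:
assert all(is_transitive_mod([1, 1], 2**n) for n in range(1, 12))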
invariant automorphisms since distance preserved group automorphisms denoted aut another important concept want introduce definition section endomorphism vertex definition let endomorphism word map given clearly defines endomorphism called section vertex inductively define section vertex order fully define action endomorphism need specify action first level well sections vertices first level see action second level sections vertices first level case endomorphism rooted binary tree action first level either trivial switch another language define endomorphism give wreath recursion also specifies action first level sections vertices first level language makes computations easier since computations endomorphisms approach used visualize action using called full portrait full portrait labeled infinite rooted tree root labeled name endomorphism vertex labeled vertex usually write name mapping defines first level subtree hanging case rooted binary tree draw little arc called switch connecting two edges hanging acts nontrivially first level subtree hanging switch means action trivial psfrag replacements figure full portrait adding machine one basic examples automorphisms rooted binary tree adding machine denote throughout paper gets name fact action boundary tree identified ring equivalent adding one input sections vertices respectively identity automorphism acts nontrivially first level full portrait shown figure next definition introduces notion level transitivity core concept paper also necessary sufficient condition automorphism act level transitively rooted binary tree provided next theorem partial case general result proposition definition automorphism said act level transitively acts transitively level proposition let automorphism rooted binary tree acts level transitively full portrait odd number switches nontrivial actions level including zeroth level remark proof theorem involves induction level tree quite short leave exercise point even though external result use paper proof included essentially extra cost affect claim proof self contained modulo result rivest characterization permutational polynomials example see figure adding machine exactly one switch level last theorem asserts acts level transitively rooted binary tree endomorphisms rooted trees arising polynomials fixed integer polynomial induces mappings positive integers taking evaluation map modulo identifying nth level rooted tree ring polynomial gives rise mapping whole tree next proposition show mapping always endomorphism rooted tree preserves root adjacency relation mapping bijection induces permutation level rooted tree hence induces automorphism use term permutational polynomial denote polynomial induces automorphism denote automorphism induces well letter used refer functions confusion arise proposition let induces endomorphism rooted tree moreover different polynomials induce different endomorphisms proof pick two adjacent vertices tree parent let since mod follows mod thus uniqueness expansion obtain hence parent means preserves adjacency relation endomorphism consider two different polynomials find integer let smallest positive integer mod actions level different next goal completely describe endomorphisms induced polynomials explicitly describing sections vertices proceed next theorem need introduce basic notation make expressions less cumbersome notation given two integers use division algorithm find two unique integers adopt notation div remainder divided integer always dqd theorem given polynomial inducing 
endomorphism image vertex induced endomorphism section polynomial given equation proof pick vertex prefix write corresponds suffix word equation using taylor expansion fact according equality dqd obtain therefore slight abuse notation left hand side denote vertices first level right hand side element finally remark equation immediately implies sections polynomials degree degree also since sections level leading coefficient example sections polynomial vertices first three levels tree depicted figure proposition polynomial acting finitely many distinct sections linear proof first show linear polynomial acting finitely many sections equation see section vertex given hence sections linear polynomials leading coefficient number sections bounded number distinct constant terms sections since constant term written exactly summands equal enough notice collection integers max therefore section form max fact nonlinear polynomial acting infinitely many sections follows immediately remark proposition turn attention permutational polynomials inducing automorphisms first recall definition definition polynomial said simply permutational clear context mapping induced evaluation homomorphism permutation according proposition permutational polynomial induces automorphism tree following simple remark follows immediately definition remark sections permutational polynomial acting permutational replacements figure sections automorphism induced polynomial set linear polynomials obviously forms group operation composition however set polynomials form group closed taking inverses fact structure cancellative monoid shown next proposition proposition set polynomials forms cancellative monoid operation composition proof clear composition two polynomials polynomial induces automorphism rooted tree polynomial plays role identity clearly induces identity automorphisms tree structure monoid inverse automorphism induced permutational polynomial always exists always induced polynomial although case linear permutational polynomials inverse nonlinear permutational polynomial induced polynomial true would two polynomials linear acting ring boundary trivial composition impossible still existence inverse makes cancellation legitimate remark consider action polynomial specific level identified inverse always induced permutational polynomial shown rest paper consider permutational polynomials acting rooted binary tree next theorem introduced rivest determines conditions polynomial induces permutation hence automorphism polynomial theorem polynomial induces permutation satisfies following conditions mod mod iii mod theorem put restriction constant term polynomial assuming satisfies conditions theorem induces permutation even odd evens mapped evens odds odds mapped odds evens special case theorem linear defines permutation odd permutational polynomial always mean polynomial also drop subscript notation function write div according equation sections permutational polynomial acting vertices given two equations tat case linear polynomial example adding machine introduced preliminary section represented linear permutational polynomial sections respectively according theorem group linear polynomials isomorphic group odd generating set group generators infinite orders except involution shown subgroup generated group pqm adopt convention expression used denote composition two functions means function acts first paper bartholdi considered sections linear polynomials acting end section introducing one notation stating simple lemma used 
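The section formula can be made concrete with a short computation. A minimal sympy sketch, assuming integer coefficients, which also verifies the defining identity f(v + d^n t) = (f(v) mod d^n) + d^n f|_v(t):

import sympy as sp

x = sp.symbols('x')

def section(f, v, n, d):
    # f|_v(x) = f(v) div d^n + sum_{i>=1} f^(i)(v)/i! * d^(n(i-1)) * x^i
    deg = sp.Poly(f, x).degree()
    sec = sp.Integer(int(sp.Poly(f, x).eval(v)) // d**n)  # the "div" term
    for i in range(1, deg + 1):
        coeff = sp.diff(f, x, i).subs(x, v) / sp.factorial(i)
        sec += coeff * d**(n * (i - 1)) * x**i
    return sp.expand(sec)

# Consistency check of f(v + d^n t) = (f(v) mod d^n) + d^n * f|_v(t):
f, v, n, d, t = 1 + 3*x + 2*x**2, 5, 3, 2, 7
lhs = f.subs(x, v + d**n * t)
rhs = int(sp.Poly(f, x).eval(v)) % d**n + d**n * section(f, v, n, d).subs(x, t)
assert sp.simplify(lhs - rhs) == 0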
section simplify proof main result notation suppose permutational polynomial uniquely defines following integers following lemma particular shows every permutational polynomial section another permutational polynomial lemma let permutational polynomial acting rooted binary tree vertex permutational polynomial corresponding section satisfies mod mod proof clearly enough check conditions vertices first level trivially follows equations notice coefficients divisible coefficients divisible level transitivity permutational polynomials acting rooted binary tree start section presenting couple basic number theoretic facts used many times proof main theorem introduce bunch lemmas without proofs proofs straight forward leave simple exercises first lemma gives properties function defined last section lemmas follow fact positive integer mod integers mod lemma three integers iii mod mod mod mod lemma let collection odd numbers sequence integers mod parity lemma let integers mod mod lemma let collection odd numbers sequence integers mod time introduce main theorem determines conditions permutational polynomial acts level transitively rooted binary tree obviously permutational polynomial acts nontrivially first level tree constant term odd according proposition permutational polynomial acts level transitively level tree number sections odd constant terms odd equivalently sum constant terms odd next proposition determines conditions linear permutational polynomial meet order act level transitively general result given theorem prove induction level proof linear case provided first idea induction becomes clear general case considered actually proof given essentially different proofs similar results introduced proposition let permutational polynomial acting rooted binary tree action level transitive following conditions hold mod mod proof first show conditions necessary condition satisfied act transitively first level assume condition satisfied condition also use fact odd since permutational thus write integers odd sections respectively sum two constant terms even therefore act transitively second level prove sufficiency conditions use induction level first write integer follows condition since odd acts transitively first level assume sections level respectively suppose mod serves induction hypothesis sections level respectively hence sum constant terms sections level mod used lemma part proof complete adding machine well odd powers odd satisfy conditions last proposition act level transitively rooted binary tree theorem let permutational polynomial acting rooted binary tree action level transitive following conditions hold mod mod iii mod proof show given conditions necessary first notice condition satisfied act transitively first level assume condition satisfied hence write integer since permutational satisfies three conditions theorem three conditions could written respectively integers adding sides condition iii simple algebraic manipulation conditions iii rewritten mod mod equations tell constant terms sections respectively sum used lemma part iii act transitively second level must mod exactly one two conditions satisfied mod however two conditions satisfied mod act transitively second level show latter case act transitively third level proving sum constant terms sections second level even equations infer constant terms sections vertices respectively tat sum constant terms modulo used lemma part using part iii lemma reusing part well sum modulo simplifies mod used lemma obtain apply part iii lemma sum 
modulo mod used fact two conditions satisfied prove sufficiency conditions use fact permutational write integers easily infer mod mod last equality equivalent equality show acts transitively first second levels use induction lower levels condition guarantees transitivity first level deduced proof necessity transitivity second level equivalent equality automatically satisfied case shown formulate induction hypothesis recall vertex section induced permutational polynomial defines integers also vertex mod mod lemma let assume sections level tree suppose mod serves induction hypothesis let sections level mod induction hypothesis used lemma part iii rearranged terms finally exploited lemma last transition claim mod show every permutational polynomial satisfying two conditions mod mod mod enough prove claim first thus equations tbt using lemma write sum modulo mod equations write modulo mod last equivalence comes applying lemma therefore sum mod lemma mod mod proof complete thus mapping acting rooted binary tree level transitive orbit every element boundary tree dense thus dynamical system minimal also anashin proved polynomial ergodic respect normalized haar measure transitive mod every positive integer combining theorem theorem thus obtain new elementary proof result larin theorem let dynamical system minimal equivalently ergodic respect normalized haar measure following conditions satisfied mod mod iii mod mod references finite automata burnside problem periodic groups mat zametki anashin uniformly distributed sequences computer algebra construct program generators random numbers math sci new york computing mathematics cybernetics vladimir anashin ergodic transformations space integers mathematical physics volume aip conf pages amer inst melville anashin automata finiteness criterion terms van der put series automata functions numbers ultrametric anal bondarenko grigorchuk kravchenko muntyan nekrashevych savchuk classification groups generated automata alphabet algebra discrete available http hyman bass maria victoria daniel rockmore charles tresser cyclic renormalization automorphism groups rooted trees volume lecture notes mathematics berlin laurent bartholdi zoran solvable automaton groups topological asymptotic aspects group theory volume contemp pages amer math providence fan polynomial dynamics https rostislav grigorchuk peter linnell thomas schick andrzej question atiyah acad sci paris grigorchuk nekrashevich automata dynamical systems groups mat inst steklova din avtom beskon gruppy grigorchuk burnside problem periodic groups funktsional anal grigorchuk milnor problem group growth dokl akad nauk sssr narain gupta said sidki burnside problem periodic groups math rostislav grigorchuk zoran asymptotic aspects schreier graphs hanoi towers groups math acad sci paris rostislav grigorchuk dmytro savchuk ergodic decomposition group actions rooted trees mat inst steklova algebra geometriya teoriya chisel donald knuth art computer programming vol publishing reading second edition seminumerical algorithms series computer science information processing larin transitive polynomial transformations residue rings diskret rudolf lidl harald niederreiter finite fields volume encyclopedia mathematics applications publishing company advanced book program reading foreword cohn alexei miasnikov dmytro savchuk example automatic graph intermediate growth ann pure appl logic smile markovski zoran danilo gligoroski polynomial functions units quasigroups related systems volodymyr nekrashevych groups volume 
mathematical surveys monographs american mathematical society providence ronald rivest permutation polynomials modulo finite fields rrsy ronald rivest robshaw sidney yin block cipher posted site rsa laboratories slides nist conferences sushchansky periodic permutation unrestricted burnside problem dan russian department mathematics statistics university south florida fowler ave tampa department mathematics statistics university south florida fowler ave tampa savchuk
4
fully decentralized policies systems information theoretic approach roel david claire tomlin august jul abstract learning cooperative policies systems often challenged partial observability lack coordination settings structure problem allows distributed solution limited communication consider scenario communication available instead learn local policies agents collectively mimic solution centralized static optimization problem main contribution information theoretic framework based rate distortion theory facilitates analysis well resulting fully decentralized policies able reconstruct optimal solution moreover framework provides natural extension addresses nodes agent communicate improve performance individual policy introduction finding optimal decentralized policies multiple agents often hard problem hampered partial observability lack coordination agents distributed problem approached variety angles including distributed optimization boyd game theory aumann dreze decentralized networked partially observable markov decision processes pomdps oliehoek amato goldman zilberstein nair paper analyze different approach consisting simple learning scheme design fully decentralized policies agents collectively mimic solution common optimization problem access global reward signal either restricted access agents local state algorithm generalization proposed prior work sondermeijer related decentralized optimal power flow opf indeed success decentralization opf domain motivated understand well method works general decentralized optimal control setting key contribution work view decentralization compression problem apply classical results information theory analyze performance limits specifically treat ith agent optimal action centralized problem random variable model conditional dependence global state variables assume stationary time restrict agent observe ith state variable rather solving decentralized problem directly train agent replicate would done full information centralized case vector state variables compressed ith agent must decompress compute estimate approach agent learns parameterized markov control policy via regression learned data set containing local states taken historical measurements system state corresponding optimal actions computed solving offline centralized optimization problem context analyze fundamental limits compression particular interested unraveling relationship dependence structure corresponding ability agent partial information approximate optimal solution difference distortion decentralized action type relationship well studied within information theory literature instance rate distortion theory cover thomas chapter classical results field provide means finding lower bound expected distortion function mutual information rate communication lower bound valid specified distortion metric arbitrary strategy computing available data moreover able leverage similar result provide conceptually simple algorithm choosing communication structure letting regressor depend local states way lower bound expected distortion minimized method generalizes sondermeijer provides novel approach design analysis decentralized optimal policies general systems demonstrate results synthetic examples real example drawn solving opf electrical distribution grids indicates equal contribution dobbe david claire tomlin department electrical engineering computer sciences university california berkeley usa dobbe dfk tomlin roel related work decentralized control long studied within system theory 
literature lunze siljak recently various decomposition based techniques proposed distributed optimization based primal dual decomposition methods require iterative computation form communication either central node boyd connected graph raffard sun distributed model predictive control mpc optimizes networked system composed subsystems time horizon decentralized communication dynamic interconnections subsystems weak order achieve stability well performance christofides work zeilinger extended systems strong coupling employing distributed terminal set constraints requires communication another class methods model problems agents try cooperate common objective without full state information decentralized partially observable markov decision process oliehoek amato nair introduce networked distributed pomdps variant inspired part pairwise interaction paradigm distributed constraint optimization problems dcops although specific algorithms works differ significantly decentralization scheme consider paper larger difference problem formulation described sec study static optimization problem repeatedly solved time step much prior work especially optimal control mpc reinforcement learning poses problem dynamic setting goal minimize cost time horizon context reinforcement learning time horizon long leading well known tradeoff exploration exploitation appear static case additionally many existing methods dynamic setting require ongoing communication strategy agents though peshkin even static problems dcops tend require complex communication strategies modi although mathematical formulation approach rather different prior work policies compute similar spirit learning robotic techniques proposed behavioral cloning sammut apprenticeship learning abbeel aim let agent learn examples addition see parallel recent work bounded rationality ortega seeks formalize limited resources time energy memory computational effort allocated arriving decision work also related swarm robotics brambilla learns simple rules aimed design robust scalable flexible collective behaviors coordinating large number agents robots general problem formulation consider distributed problem defined graph denoting nodes network cardinality representing set edges nodes fig shows prototypical graph sort node state vector subset nodes cardinality controllable hence termed agents action variable let denote full network state vector stacked network optimization variable physical constraints spatial coupling captured equality constraints addition system subject inequality constraints incorporate limits due capacity safety robustness etc interested minimizing convex scalar function encodes objectives pursued cooperatively agents network want find arg min note static sense consider future evolution state corresponding future values cost apply static problem sequential control tasks repeatedly solving time step note simplification explicitly dynamic problem formulation one objective function incorporates future costs purely ease exposition consistency opf literature sondermeijer could also consider optimal policy solves dynamic optimal control problem decentralized learning step sec would remain since static applying learned decentralized policies repeatedly time may lead dynamical instability identifying occur key challenge verifying decentralization method however beyond scope work decentralized learning interpret process solving applying function stationary markov policy maps input collective state optimal collective control action presume solution 
exists computed offline objective learn decentralized policies one agent based historical measurements states offline computation corresponding optimal actions distributed problem graphical model dependency structure figure shows connected graph corresponding distributed system circles denote local state agent dashed arrow denotes action double arrows denote physical coupling local state variables shows markov random field mrf graphical model dependency structure variables decentralized learning problem note state variables optimal actions form fully connected undirected network local policy depends local state local training sets system centralized optimization data gathering decentralize decentralize decentralized dlearning dlearning learning optimal data approximate local policies figure flow diagram explaining key steps decentralized regression method depicted example system fig first collect data system solve centralized optimization problem using data data split smaller training test sets agents develop individual decentralized policies approximate optimal solution centralized problem policies implemented system collectively achieve common global behavior although policy individually aims approximate based local state able reason well collective action approximate figure summarizes decentralized learning setup formally describe dependency structure individual policies markov random field mrf graphical model shown fig allowed depend local state may depend full state model determine information distributed among different variables constraints policies subject collectively trying reconstruct centralized policy note although may refer globally optimal actually required reason closely approximate analysis holds even solved using approximate methods dynamical reformulation example could generated using techniques deep framework approach problem well decentralized policies perform theory perspective rate distortion rate distortion theory information theory provides framework understanding computing minimal distortion incurred given compression scheme rate distortion context interpret fact output individual policy depends local state compression full state detailed overview see cover thomas chapter formulate following variant classical rate distortion problem denotes mutual information arbitrary distortion measure usual minimum distortion random variable reconstruction may found minimizing conditional distributions novelty lies structure constraints typically written function maximum rate mutual information fig however know pairs reconstructed optimal actions share information contained intermediate nodes graphical model share information simple consequence data processing inequality cover thomas thm similarly reconstructed optimal actions two different nodes closely related measurements computed resulting constraints fixed joint distribution state optimal actions fully determined structure optimization problem wish solve emphasize made virtually assumptions distortion function remainder paper measure distortion deviation however could also define suboptimality gap may much complicated compute definition could allow reason explicitly cost decentralization could address valid concern optimal decentralized policy may bear resemblance leave investigation future work example squared error jointly gaussian provide intuition rate distortion framework consider idealized example let squared error distortion measure assume state optimal actions jointly gaussian assumptions allow derive explicit 
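The decentralized learning step described above reduces to supervised regression per agent. The sketch below is a minimal illustration rather than the exact pipeline of the paper: solve_centralized is a placeholder for the offline optimization, each agent's local state is taken to be a single coordinate of the network state, and the quadratic features echo the feature kernels used in the OPF case study below.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

def train_local_policies(X, solve_centralized, agents):
    # X: T x n matrix of historical network states.
    # solve_centralized(x): optimal action vector u* for state x (placeholder).
    U = np.array([solve_centralized(x_t) for x_t in X])
    policies = {}
    for i in agents:
        # Agent i regresses its own optimal action on its local state only.
        model = make_pipeline(PolynomialFeatures(degree=2), Ridge(alpha=1.0))
        model.fit(X[:, [i]], U[:, i])
        policies[i] = model
    return policies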
formula optimal distortion corresponding regression policies begin stating identity two jointly gaussian correlation follows immediately definition mutual information formula entropy gaussian random variable taking correlation variances respectively assuming equal mean unbiased policies show minimum distortion attainable min min solved optimal correlations unsurprisingly optimal value turns maximum allowed mutual information constraint correlated possible particular much correlated similarly solve optimal result optimum means correlation local state optimal action decreases variance estimated action decreases well result learned policy increasingly bet mean listen less local measurement approximate optimal action moreover may also provide closed form expression regressor achieves minimum distortion since assumed state jointly gaussian may write affine function plus independent gaussian noise thus minimum mean squared estimator given conditional expectation thus found closed form expression best regressor predict joint gaussian case squared error distortion result comes direct consequence knowing true parameterization joint distribution case gaussian determining minimum distortion practice often practice know parameterization hence may intractable determine corresponding decentralized policies however one assume belongs family parameterized functions instance universal function approximators deep neural networks theoretically possible attain least approach minimum distortion arbitrary distortion measures practically one compute mutual information constraint understand much information regressor available reconstruct gaussian case able compute mutual information closed form data general distributions however often way compute mutual information analytically instead rely access sufficient data order estimate mutual informations numerically situations sec discretize data compute mutual information minimax risk estimator proposed jiao allowing restricted communication suppose decentralized policy suffers insufficient mutual information local measurement optimal action case would like quantify potential benefits communicating nodes order reduce distortion limit improve ability reconstruct section present solution problem choose optimally data observe provide lower solution idealized gaussian case introduced sec assume addition observing local state allowed depend theorem restricted communication set nodes allowed observe addition setting arg max minimizes expectation distortion measure choice yields smallest lower bound possible choice proof assumption maximizes mutual information observed local states optimal action mutual information equivalent notion rate classical rate distortion theorem cover thomas distortion rate function convex monotone decreasing thus maximizing mutual information guaranteed minimize distortion hence theorem provides means choosing subset state communicate decentralized policy minimizes corresponding best expected distortion practically speaking result may interpreted formalizing following intuition best thing transmit case transmitting information corresponds allowing observe set nodes contains information likewise best mean minimizes expected distortion distortion metric sec without making assumption structure distribution guarantee particular regressor attain nevertheless practical situation sufficient data available solve estimating mutual information jiao example joint gaussian squared error communication reexamine joint mean squared error distortion case sec apply 
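For the jointly Gaussian example with squared-error distortion, the optimal decentralized policy is the conditional mean, and its distortion matches the closed form sigma_U^2 (1 - rho^2). A small numerical check, with illustrative parameter values:

import numpy as np

rng = np.random.default_rng(0)

rho, var_x, var_u = 0.8, 1.0, 1.0
cov = np.array([[var_x, rho * np.sqrt(var_x * var_u)],
                [rho * np.sqrt(var_x * var_u), var_u]])
x, u = rng.multivariate_normal([0.0, 0.0], cov, size=100_000).T

# MMSE regressor for zero-mean jointly Gaussian (x, u):
u_hat = (rho * np.sqrt(var_u / var_x)) * x

# Empirical distortion vs. the closed-form value (1 - rho^2) * var_u.
print(np.mean((u - u_hat) ** 2), (1 - rho**2) * var_u)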
thm take jointly gaussian zero mean arbitrary covariance specific covariance matrix joint distribution visualized fig simplicity show squared correlation coefficients lie boxed cells fig indicate solves maximizes mutual information observed data regression target intuitively choice best highly correlated weakly correlated already observed conveys significant amount information already conveyed figure shows empirical results along horizontal axis increase value number additional variables regressor observes vertical axis shows resulting average distortion show results linear regressor form chosen optimally according well uniformly random possible sets unique indices note optimal choice yields lowest average distortion choices moreover linear regressor achieves since assumed gaussian joint distribution application optimal power flow case study aim minimize voltage variability electric grid caused intermittent renewable energy sources increasing load caused electric vehicle charging controlling reactive power output distributed energy resources ders adhering physics power flow constraints due energy capacity safety recently various approaches proposed farivar zhang methods ders tend rely extensive communication infrastructure either central master node optimal strategy average random strategy mse additional observations squared correlation coefficients comparison communication strategies figure results optimal communication strategies synthetic gaussian example shows squared correlation coefficients boxed entries correspond found optimal shows optimal communication strategy thm achieves lowest average distortion outperforms average random strategies agents leveraging local computation dall anese study decentralization outlined sec fig optimal power flow opf problem low initially proposed sondermeijer apply thm determine communication strategy minimizes optimal distortion improve reconstruction optimal actions solving opf requires model electricity grid describing topology impedances represented graph clarity exposition without loss generality introduce linearized power flow equations radial networks also known lindistflow equations baran pij pjk pcj pgj qij qjk qjc qjg rij pij qij model capitals pij qij represent real reactive power flow branch node node branches lower case pci qic real reactive power consumption node real reactive power generation complex line impedances rij indexing power flows lindistflow equations use squared voltage magnitude defined indexed nodes equations included constraints optimization problem enforce solution adheres laws physics formulate decentralized learning problem treat pci qic pgi local state variable controllable nodes agents qig reactive power generation controlled pij qij treated dummy variables assume nodes consumption pci qic real power generation pgi predetermined respectively demand power generated potential photovoltaic system action space constrained reactive power capacity addition voltages maintained within expressed constraint opf problem reads vref arg min following fig employ models real electrical distribution grids including ieee test feeders ieee pes equip historical readings load data composed real smart meter measurements sourced pecan street solve data yielding set minimizers separate overall data set smaller data sets train linear policies feature kernels parameters form practically challenge select best feature kernel extend earlier work showed decentralized learning opf done satisfactorily via hybrid selection algorithm friedman 
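The restricted-communication rule of the theorem can be implemented directly when the variables are jointly Gaussian, since I(X_B; U) is then available in closed form from the covariance matrix. The sketch below exhaustively searches the subsets of a given size; it is illustrative rather than the procedure used in the experiments.

import numpy as np
from itertools import combinations

def gaussian_mi(cov, B, u):
    # I(X_B; U) = 0.5 * log( det(S_BB) * S_uu / det(S_[B,u]) )
    B = list(B)
    S_BB = cov[np.ix_(B, B)]
    S_joint = cov[np.ix_(B + [u], B + [u])]
    return 0.5 * np.log(np.linalg.det(S_BB) * cov[u, u]
                        / np.linalg.det(S_joint))

def best_subset(cov, u, nodes, k):
    # Observe the k nodes whose joint state is most informative about U.
    return max(combinations(nodes, k), key=lambda B: gaussian_mi(cov, B, u))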
chapter uses quadratic feature kernels figure shows result electric distribution grid model based real network arizona network nodes simulation nodes equipped controllable der fig show voltage deviation normalized setpoint simulated network data used training improvement baseline striking performance nearly identical optimum achieved linear random linear optimal quadratic random quadratic optimal mse additional observations voltage output without control comparison opf communication strategies figure results decentralized learning opf problem shows example result decentralized learning shaded region represents range voltages network full day compared control fully decentralized control reduces voltage variation prevents constraint violation dashed line shows optimal communication strategy outperforms average random strategies mean squared error distortion metric regressors used stepwise linear policies linear quadratic features centralized solution concretely observed constraint violations suboptimality deviation average maximum deviation compared optimal policy addition applied thm opf problem smaller network ieee pes order determine optimal communication strategy minimize squared error distortion measure fig shows mean squared error distortion measure increasing number observed nodes shows optimal strategy outperforms average random strategies conclusions future work paper generalizes approach sondermeijer solve static optimal control problems decentralized policies learned offline historical data rate distortion framework facilitates principled analysis performance decentralized policies design optimal communication strategies improve individual policies techniques work well model sophisticated opf example still many open questions decentralization well known strong interactions different subsystems may lead instability suboptimality decentralized control problems davison chang natural extensions work address dynamic control problems explicitly stability analysis topic ongoing work also analysis suboptimality decentralization possible within rate distortion framework finally worth investigating use deep neural networks parameterize distribution local policies complicated decentralized control problems arbitrary distortion measures references abbeel apprenticeship learning via inverse reinforcement learning international conference machine learning new york usa acm aumann dreze cooperative games coalition structures international journal game theory baran optimal capacitor placement radial distribution systems ieee transactions power delivery boyd parikh chu peleato eckstein distributed optimization statistical learning via alternating direction method multipliers foundations trends machine learning july brambilla ferrante birattari dorigo swarm robotics review swarm engineering perspective swarm intelligence mar christofides scattolini pena liu distributed model predictive control tutorial review future research directions computers chemical engineering cover thomas elements information theory john wiley sons dall anese dhople giannakis optimal dispatch photovoltaic inverters residential distribution systems sustainable energy ieee transactions url http davison chang decentralized stabilization pole assignment general proper systems ieee transactions automatic control farivar chen low equilibrium dynamics local voltage control distribution systems ieee annual conference decision control cdc pages doi friedman hastie tibshirani elements statistical learning volume springer series 
statistics springer berlin goldman zilberstein decentralized control cooperative systems categorization complexity analysis artif int issn url http ieee pes ieee distribution test feeders url http jiao venkat han weissman minimax estimation functionals discrete distributions arxiv preprint june arxiv low convex relaxation optimal power flow part formulations equivalence ieee transactions control network systems mar lunze feedback control large scale systems prentice hall ptr upper saddle river usa isbn modi shen tambe yokoo adopt asynchronous distributed constraint optimization quality guarantees artif issn doi url http nair varakantham tambe yokoo networked distributed pomdps synthesis distributed constraint optimization pomdps aaai volume pages oliehoek amato concise introduction decentralized pomdps springer international publishing edition ortega braun dyer kim tishby bounded rationality arxiv preprint pecan street dataport url http peshkin kim meuleau kaelbling learning cooperate via policy search proceedings sixteenth conference uncertainty artificial intelligence uai pages san francisco usa morgan kaufmann publishers isbn url http zeilinger jones inexact fast alternating minimization algorithm distributed model predictive control conference decision control los angeles usa ieee raffard tomlin boyd distributed optimization cooperative agents application formation flight conference decision control nassau bahamas ieee sammut automatic construction reactive control systems using symbolic machine learning knowledge engineering review siljak decentralized control complex systems dover books electrical engineering dover new york url http sondermeijer dobbe arnold tomlin keviczky inverter control decentralized optimal power flow voltage regulation power energy society general meeting boston usa july ieee sun phan ghosh fully decentralized optimal power flow algorithms power energy society general meeting vancouver canada july ieee dong zhang hill coordinated control high renewablepenetrated distribution systems ieee transactions power systems issn doi zeilinger riverso jones plug play distributed model predictive control based distributed invariance optimization conference decision control florence italy ieee zhang lam tse optimal distributed method voltage regulation power distribution systems ieee transactions power systems issn doi
7
permutation monoids structures feb thomas david robert february abstract paper investigate connection infinite permutation monoids bimorphism monoids structures taking lead study automorphism groups structures infinite permutation groups recent developments field structures establish series results underline connection particular interest idea relational structure every monomorphism finite substructures extends bimorphism results question include characterisation closed permutation monoids theorem structures construction pairwise countable graphs prove finite group arises automorphism group graph use construct oligomorphic permutation monoids given finite group units also consider various examples homogeneous structures particular give complete classification countable homogeneous undirected graphs also keywords bimorphisms cancellative monoids permutation monoids oligomorphic transformation monoids structures infinite graph theory mathematics subject classification let structure automorphism group aut structure whatever preserved automorphisms automorphism group key concept understanding model theory every automorphism permutation domain hence view aut infinite permutation group much existing literature field explores connections infinite permutation group theory model theory see instance recent work studied endomorphism monoid end structure analogously examples infinite transformation monoids imposing additional conditions type endomorphism obtain various school mathematics statistics university andrews andrews united kingdom email tdhc department mathematics imperial college london south kensington campus london united kingdom email school mathematics university east anglia norwich united kingdom email work supported epsrc grant special inverse monoids subgroups structure geometry rewriting systems word problem submonoids end various natural monoids transformations associated structure studied lockett truss principal aim papers cameron pech lockett truss hartman focus generalising current theory infinite permutation groups case infinite transformation monoids particularly case end research end restricted finding analogues results aut understanding end key study polymorphism clone pol hence complexity constraint satisfaction problems connection provides motivation studying endomorphism monoids structures extensively studied bodirsky bodirsky bodirsky pinsker collection bijective endomorphisms structure domain forms monoid composition operation call bimorphism monoid structure denote course every automorphism bijective homomorphism general converse hold hence contains automorphism group aut group units also contained symmetric group sym since every element bijection therefore bimorphism monoid gives natural example permutation monoid monoid element permutation fact shall see countable set closed submonoids symmetric group sym pointwise convergence topology precisely bimorphism monoids structures domain theorem although natural concept permutation monoids received much attention literature definition every permutation monoid monoid study monoids principal interest early semigroup theorists well known result ore theorem see says monoid embeddable group cancellative satisfies ore condition furthermore monoid faithful representation monoid permutations equivalently groupembeddable isomorphic submonoid symmetric group permutation monoid result reyes see states subgroup infinite symmetric group closed automorphism group structure examples highly symmetric infinite permutation groups abundant 
literature often arising automorphism groups structures nice structural conditions specifically homogeneity structure unique isomorphism countable model theory homogeneous every isomorphism finite substructures extends automorphism two related every homogeneous structure finite relational language structure homogeneous quantifier elimination finding examples structures provides corresponding examples interesting permutation groups famous theorem engeler svenonius see structure aut finitely many orbits every groups called oligomorphic permutation groups follows potential source oligomorphic permutation groups automorphism groups homogeneous structures finite relational language celebrated theorem gives characterisation homogeneous structures used construct many examples structures oligomorphic automorphism group structures include countable dense linear order without endpoints random graph generic poset followed time complete classification results countable homogeneous structures posets schmerl undirected graphs lachlan woodrow directed graphs cherlin study infinite permutation monoids bimorphism monoids structures natural analogue homogeneity structure every finite partial monomorphism extends bimorphism notion homogeneity first introduced lockett truss determined conditions using existence generic bimorphisms structure results shown authors classified hence posets result natural extension every finite partial monomorphism extends monomorphism widely considered instance cameron demonstrated result construction uniqueness mmhomogeneous structures aim paper develop theory infinite permutation monoids particular focus bimorphism monoids structures begin section recalling definition oligomorphic transformation monoid extend results paper considering monoids introduced lockett truss section investigates permutation monoids detail including characterisation closed permutation monoids theorem theorem propositions section devoted graphs including establishing useful properties graphs introducing notion bimorphism equivalence definition demonstrating exist nonisomorphic graphs bimorphism equivalent random graph theorem furthermore show finite group exists graph aut theorem consequently exists oligomorphic permutation monoid group units finally section assesses selection previously known homogeneous structures determine whether culminating complete classification countably infinite graphs homogeneous theorem throughout article maps act right arguments compose maps left right relational signature consists collection relations arity equality consists domain subsets interpreting follow usual convention notation regarding relations write rim structures countably infinite unless stated otherwise oligomorphic transformation monoids let countably infinite set let infinite submonoid end transformation monoid acting set via extend action tuples acting componentwise begin outlining important definitions regarding notion orbits tuples transformation monoid versions appear definition let end transformation monoid acting tuples let group units define forward orbit tuple set define strong orbit set define group orbit set note immediately tuple furthermore relation strong orbit equivalence relation comparison forward orbit reflexive transitive thus preorder may symmetric notion therefore differs slightly steinberg weak orbits obtained taking closure forward orbit preorder outline basic lemma regarding orbits useful throughout section proof omitted lemma let transformation monoid acting set tuples tuple following 
recall permutation group sym acts oligomorphically finitely many orbits every action componentwise tuples oligomorphic say oligomorphic permutation group next definition originally places concepts context transformation monoids definition say transformation monoid end acts oligomorphically finitely many strong orbits every componentwise action tuples oligomorphic call oligomorphic transformation monoid note group strong orbits group orbits definitions coincide oligomorphic permutation group oligomorphic transformation monoid next result provides connections oligomorphic permutation groups oligomorphic transformation monoids generalising lemma proposition let end transformation monoid group units oligomorphic permutation group oligomorphic transformation monoid proof oligomorphic permutation group finitely many group orbits every every strong orbit arises union group orbits conclude finitely many strong orbits acting every natural number lemma remark theorem see oligomorphic permutation group automorphism group structure proposition fact aut acts group units endomorphism monoid conclude end epi mon emb oligomorphic transformation monoid see definitions various transformation monoids result provides numerous examples oligomorphic transformation monoids caveat closely related structures via group units main result section distances notion oligomorphicity monoids providing different source suitable examples first detail preliminary conditions way homogeneous structures finite language provide examples structures hence oligomorphic permutation groups turn provide examples oligomorphic transformation monoids table recall eighteen different notions presented two papers lockett truss isomorphism monomorphism homomorphism end epi mon emb aut table table structure finite partial map type column extends map type row associated monoid note structure also shorthand next result denote monoid maps type structure example endomorphism monoid automorphism group previous observation lockett truss says endomorphism finite relational structure automorphism bijection lemma finite bijective homomorphisms isomorphisms proof composition map bijective endomorphism observation must automorphism ani rib ria homomorphism since automorphism contradiction must preserve therefore isomorphism similar argument applies show isomorphism proposition let structure domain two tuples strong orbit exists partial isomorphism proof suppose maps respectively sends elements elements contradiction hence restrictions injective maps also bijections structures induced consider maps respectively lemma see define conversely see isomorphism structures hence extend map similarly extend map map strong orbit move proving final result section result states aut acting number group orbits finite number group orbits distinct elements finite every using fact proposition maps two tuples strong orbit bijections hard show similar result holds number strong orbits acting together proposition proves following theorem theorem structure finite relational language oligomorphic transformation monoid proof finite relational language finitely many isomorphism types distinct elements proposition finitely many strong orbits distinct elements every result follows observation using theorem find examples structures oligomorphic transformation monoids instance poset schmerl classification see oligomorphic endomorphism monoid notable instance corollary exists countably infinite graph oligomorphic endomorphism monomorphism monoid trivial automorphism group particular 
interest paper idea oligomorphic permutation monoid follows special case definition transformation monoid also permutation monoid corollary theorem structure finite relational language oligomorphic permutation monoid follows finding structures give interesting examples permutation monoids motivates study mbhomogeneous structures section permutation monoids among endomorphism monoids structure mentioned introduction collection bimorphisms set endomorphisms submonoid end however collection bijective maps submonoid sym symmetric group domain follows monoid acts permutations opposed transformations thus countably infinite expressed fashion infinite permutation monoid section devoted study infinite permutation monoids outlining general properties presenting method constructing interesting infinite permutation monoids via structures start recall symmetric group sym countably infinite set natural topology given pointwise convergence basis open sets given cosets stabilizers tuples see sym closed pointwise convergence topology automorphism group structure domain generalised cameron closed submonoids end product topology occurs endomorphism monoid structure domain first proposition provides analogous result closed permutation monoids proof similar results along lines recall following standard result topology theorem subspace topological space set closed subspace topology inherited closed set furthermore pointwise convergence topology sym topology induced set end via inclusion map sym subspace end theorem let countable set submonoid sym closed pointwise convergence topology bimorphism monoid structure domain proof begin converse direction suppose bimorphism monoid structure domain sym subspace end intersection sym closed set end end follows result closed sym forward direction assume closed submonoid sym define relation let relational structure relations tuples proof containment ways every element already permutation domain proving acts endomorphisms enough show assume holds happens exists therefore holds end hence remains show suppose aim show limit point closed must contain limit points note defines neighbourhood consisting functions monoid follows holds holds definition exists hence limit point therefore completing proof remark result descriptive set theory closed subset polish space polish space induced topology see sym polish space follows also polish space bimorphisms structures provide natural examples polish monoids leave area investigation open aim determine cardinality result closed submonoids sym define pointwise stabilizer set note also monoid therefore cancellative proposition countably infinite structure either first alternative holding pointwise stabilizer tuple identity proof assume let collection permutations embeds sym action extends action consequence exists unique hence fact countably many tuples forced conclude countable case hand suppose tuples countably infinite enumerate elements using enumeration define sequence tuples since tuples element exists means sequence elements infinite sequence bimorphism identity element sequence tuples eventually encapsulate every element sequence bimorphisms approaches pointwise stabilizer identity element limit point consider sequence cancellativity implies contradicting earlier assumption limit point sequence every element limit point means perfect set thus cardinality continuum mentioned introduction automorphism groups homogeneous structures examples infinite permutation groups whose orbits distinct elements determined isomorphism types 
substructures see proposition consequence theorem bimorphism monoids structures provide examples infinite permutation monoids whose orbits defined way motivated aim find way constructing mbhomogeneous structures inspiration task theorem finding homogeneous structures see building similar result cameron structures rest section devoted providing theorem constructing structures throughout section relational signature class finite following convention write mean domains refer reader background model theory two main properties class finite structures guarantee existence countable homogeneous structure age first joint embedding property jep property ensures construct countable structure age second amalgamation property ensures constructed structure homogeneous showing equivalent extension property building countable structure age still need jep ensure age want need different amalgamation property ensure rather standard homogeneity aiming extend bijective endomorphisms require back forth style argument ensure extended map indeed bijective due fact bimorphisms automorphisms general require two amalgamation conditions hence two extension conditions ensure mbhomogeneous begin section examining required extension properties follows proposition must property mep takes care forward extension mep age monomorphism exists monomorphism extending suitable back condition express statement key difference monomorphism bimorphism difference existence preimage substructure exists substructure note true arbitrary monomorphism required equal existence preimage every extended map condition wish ensure back extension property motivates next definition definition let two say injective map antimonomorphism implies relations remarks note map monomorphism antimonomorphism embedding easy exercise show function composition two antimonomorphisms antimonomorphism particular important note composition antimonomorphism embedding antimonomorphism state prove lemma stating preimage monomorphism antimonomorphism bijective function write unique bijection notice lemma let two suppose bijection monomorphism antimonomorphism proof suppose monomorphism holds since preserves relations follows holds converse direction suppose antimonomorphism relation preserves follows holds finally therefore monomorphism remarks restricting codomain monomorphism image see bijective homomorphism therefore antimonomorphism lemma similarly restricting codomain antimonomorphism image see obtain bijective homomorphism immediate corollary result bijective homomorphisms bijective antimonomorphism use antimonomorphisms express backward extension property bep bep age antimonomorphism exists antimonomorphism extending properties prove necessary sufficient proposition let countable structure age mbhomogeneous bep mep proof suppose also mep proposition let antimonomorphism age exist copies isomorphisms restrict codomain image find antimonomorphism monomorphism bijective antimonomorphism lemma extend bimorphism bijective homomorphism lemma define remains show extends indeed extends therefore bep conversely suppose bep mep let monomorphism finite substructures shall extend bimorphism stages using back forth argument set typical stage bijective monomorphism extending countable enumerate points even pick point smallest number dom substructure containing using mep extend monomorphism restricting codomain bijective homomorphism extending odd note bijective antimonomorphism lemma select point least number dom substructure using bep extend restricting codomain 
antimonomorphism obtain bijective antimonomorphism extending bijective homomorphism extending ensuring every appears odd even stage countably many applications procedure yield bimorphism extending remark note process extending one point time eventually creates bimorphism similar fashion dolinka section present equivalent conditions proposition using one point extensions suppose anti monomorphism exists anti monomorphism extending straightforward proof induction shows structure conditions also bep mep properties useful determining whether structure turn attention actual process construction also need ensure would make sense take forth amalgamation condition property introduced map let class finite structures maps monomorphism embedding exists monomorphisms embedding similar bep use antimonomorphisms set back amalgamation condition known property bap bap let class finite structures elements antimonomorphism embedding exists embedding antimonomorphism see figure enough prove first propositions detail construction proposition let structure age map bap proof necessarily proposition age map suppose structures age assume antimonomorphism embedding without loss generality assume inclusion map figure property bap lemma induces bijective monomorphism mbhomogeneous extend restricting yields bijective monomorphism let structure induced follows surjective define structure set inclusion map bijective homomorphism antimonomorphism lemma hence define antimonomorphism easy check age bap next result recall class finite structure jep exists proposition class finite relational structures closed isomorphisms substructures countably many isomorphism types jep map bap exists structure age proof build countably many stages assuming constructed stage note countably many isomorphism types enumerate mod select structure use jep find structure embeds define structure mod select triple monomorphism using map find structure embedding extends monomorphism mod select triple antimonomorphism using bap find structure embedding extended antimonomorphism arrange steps every structure appears mod stage every triple appears mod stage every every every monomorphism exists embedding extends monomorphism every triple appears mod stage every every every antimonomorphism exists embedding extends antimonomorphism following define remains show age construction ensures every appears mod stage follows age conversely closed substructures age equal suppose monomorphism arrangement steps exists construction exists monomorphism extends mep use similar argument show bep similar fashion proposition major consequence original theorem fact two limits age isomorphic guarantee two structures age constructed manner proposition isomorphic see section examples provide uniqueness condition method construction using weaker notion equivalence extend idea achieve goal let two countable structures signature say age age every embedding finite structure extends bijective monomorphism vice versa note equivalence relation structures signature two without isomorphic see example details show relevant equivalence relation proposition let two structures age age proof proposition suffices show mep bep certainly sense proposition result thus mep suppose age exists antimonomorphism note need isomorphic age age exists copy fix isomorphism two therefore isomorphism finite structure two extend bijective homomorphism turn induces bijective antimonomorphism lemma define antimonomorphism antimonomorphism since bep proposition extend antimonomorphism map antimonomorphism 
need show extends using facts extends extends therefore bep let bijective embedding finite structure utilise back forth argument constructing bijection countably many steps set stage bijective monomorphism extending countable enumerate points even select point dom smallest natural number dom element age assumption mep extended restricting codomain image monomorphism extending gives bijective monomorphism odd note bijective antimonomorphism select point dom least natural number therefore element age use restricting bep extend antimonomorphism codomain image delivers required bijective antimonomorphism extending taking inverse shows bijective homomorphism extending required repeating process countably many times ensuring domain image provides bijective homomorphism extending converse direction entirely analogous conclude section observing due proposition proposition structure determined age graphs section focus graphs detail article graph set vertices together set edges edge set interprets irreflexive symmetric binary relation say adjacent write say write graph define complement graph vertex set edges given recall complete graph vertices graph vertex set edges given complement graph called null graph vertices denoted say induced subgraph graph paper whenever say subgraph mean induced subgraph exception case spanning subgraph say spanning subgraph implies logical place start investigation demonstrating properties graphs proposition let graph complement also proof note age age since mep bep proposition suppose age antimonomorphism preserves may change edges monomorphism bep extended antimonomorphism turn induces monomorphism hence mep proof bep similar following guarantee certain subgraphs appear graph cameron proposition prove every graph must contain infinite complete subgraph expand proposition corollary infinite graph contains infinite complete infinite null subgraph proof graph necessarily hence contains infinite complete subgraph aforementioned result proposition also contains infinite complete subgraph result follows consequence argument graph neither locally finite graph finitely many edges involving complement locally finite graph fact say recall diameter graph greatest length shortest path two points next result restatement proposition proof follows every graph also graph corollary proposition suppose connected graph diameter every edge contained triangle examine cases graph disconnected avoid triviality shown disconnected graph disjoint union complete graphs size use result conjunction corollary see candidates disconnected graph must disjoint unions infinite complete graphs building observation disconnected graph disjoint union infinite complete graphs classify disconnected graphs next result proposition let index set finite size countably infinite proof cases note every age decomposed finite disjoint union finite complete graphs write complete graph finite size note age suppose age two choices either completely independent related exactly one former extend monomorphism monomorphism sending vertex latter extend monomorphism monomorphism sending vertex defined hence mep however note embed independent set corollary proof infinite assume age let antimonomorphism note finite finite since infinite always exist therefore regardless related extend antimonomorphism mapping stated hence proposition remark note proposition proposition complement complete multipartite graph infinitely many partitions infinite size also proof relied existence independent element every image finite 
antimonomorphism aim obtain sufficient conditions along lines order construct new examples definition let infinite graph say property every finite set exists adjacent every member say property every finite set exists every member see figure diagram property property figure diagram definition note graph property algebraically closed see due nature properties property complement algebraically closed properties prove sufficient proposition let infinite graph properties proof suppose age monomorphism finite finite set vertices property exists vertex adjacent every element potential image point map extending sending monomorphism mep using property similar fashion shows bep proposition remark converse result true proposition example graph property property complement example graph property property shows algebraically closed graph whose complement also algebraically closed present notion equivalence extends idea definition let two graphs say bimorphism equivalent exist bijective homomorphisms remark informally definition says start draw number possibly infinite extra edges onto get draw number possibly infinite extra edges onto get graph isomorphic weaker version introduced proposition every pair graphs bimorphism equivalent definition converse true specifically bimorphism equivalent graphs need age see corollary example note definition equivalent saying contain spanning subgraphs justifying name equivalence relation graphs isomorphism denote relation product two bijective homomorphisms induces bimorphism aut necessarily isomorphisms occurs singleton equivalence class show bimorphism equivalence preserves properties proposition let bimorphism equivalent graphs via bijective homomorphisms properties proof suppose property exists vertex adjacent every element homomorphism finite subset adjacent every element since bijective every finite subset written observations show property using fact bijective monomorphism lemma exists antimonomorphism since property finite exists vertex independent every element due fact preserves vertex independent every element bijective happens every finite set property converse direction symmetric fact say connection bimorphism equivalence properties proposition two graphs properties bimorphism equivalent proof assume two graphs properties use back forth argument construct bijective homomorphism bijective antimonomorphism lemma converse bijective homomorphism suppose function sending vertex vertex bijective homomorphism set assume extended bijective homomorphism finite furthermore countable enumerate vertices even select vertex smallest number property exists vertex adjacent every element define map sending extending map bijective homomorphism edge element preserved see figure diagram example figure even proof proposition odd choose vertex smallest number property exists vertex independent every element define map sending extending bijective homomorphism every edge preserved none see figure diagram example stage figure odd proof proposition repeating process infinitely many times ensuring vertex appears even stage vertex appears odd stage defines bijective homomorphism construct bijective antimonomorphism similar fashion replacing homomorphism antimonomorphism using property even steps property odd steps converse map bijective homomorphism bimorphism equivalent recall random graph countable universal homogeneous graph well known see characterised isomorphism following extension property two finite disjoint sets vertices exists adjacent every vertex vertex see figure 
figure random graph next result extends proposition establishes complementary condition graph case corollary corollary suppose countable graph properties bimorphism equivalent proof properties converse direction follows proposition forward direction follows proposition remarks three results together show equivalence class precisely set countable graphs properties particular exists see example bimorphism equivalent means draw infinitely many extra edges necessity see proposition get graph draw infinitely many edges get graph isomorphic constructing uncountably many examples aim subsection use properties graphs order construct uncountably many examples begin construct initial example graph homogeneous important wish describe many examples graphs also homogeneous countably many homogeneous graphs example let infinite binary sequence infinitely many define graph infinite vertex set edge relation pmax observe natural numbers natural ordering say graph determined binary sequence example given figure figure infinitely many term every finite subsequence aik exist natural numbers together manner construction ensures properties therefore check sequence satisfying conditions graph isomorphic homogeneous graph using classification countable homogeneous graphs lachlan woodrow edge neither complement since connected disjoint union complete graphs property complement disjoint union complete graphs corollary neither complete null contains infinite complete subgraph infinite null subgraph isomorphic graph complement finally note term simultaneously vertex adjacent hence satisfy extension property characteristic random graph accounts countable graphs classification homogeneous graph remarks let two infinite binary sequences infinitely many infinite binary sequence contains every finite binary sequence subsequence finite induced subgraphs induced edge relation true binary sequences conclude age shown two sequences see proposition throughout rest section binary sequences infinitely many guarantees graph determined properties many natural questions arise construction perhaps pertinent extent depend binary sequence answering question tell exactly many new graphs kind achieve solution investigating automorphism group invariant first lemma establishes convention zeroth place sequence denote neighbourhood set vertex lemma suppose graphs vertex sets edges determined binary sequences respectively assume following take binary sequence without loss generality adopt convention rest section binary sequence denote kth consecutive string respectively denote vertex sets corresponding subsequences see figure furthermore denote graph induced subset figure lemma let binary sequence let graph determined binary sequence suppose vertex proof assumption observation example definition edge infinitely many set infinite finite take pmax two vertices adjacent lemma let binary sequence suppose vertices vkn corresponding pkn proof define sets due construction example easy see using similar argument proof lemma infinite complete graph every element adjacent every element define map fixing pointwise sending fashion isomorphism next proposition recall exists aut graphs induced isomorphic furthermore recall independent set graph maximum independent set exist subset independent set proposition let binary sequence aut aut aut infinite direct product automorphism groups proof using lemma set convention throughout hence either depending whether starts size least write pin qim write vin wim first show automorphism fixes setwise using series 
claims prove bijective map fixing every point except single acting automorphism automorphism begin first claim claim proof claim without loss generality suppose define following sets lemma asserts reasoning outlined lemma note also finite sets maximum independent set hence must finite consider sets contained respectively pmax follows pmax hence maximum independent set using similar argument show maximum independent set since exists hence therefore different sizes maximum independent sets isomorphic concludes proof claim shows automorphism sending vertex vertex neighbourhoods move next claim claim suppose automorphism sending proof claim define set previous claim exists hence least one however lemma possibly automorphism sending proves second claim claims see automorphism fixes setwise sandwiched vertices adjacent every vertex adjacent every vertex fixed setwise conclude fixed setwise claim fixed setwise automorphism proof claim two cases consider either begins case suppose contradiction exists aut sending due case edge thus independent everything edge preserved fixed setwise use similar argument show fixed setwise case assumption contains hence graph contains nonedge therefore lemma claim fixed setwise happens fixed setwise concludes proof claim fixed setwise automorphism remains prove automorphism whilst fixing every point automorphism complete graph null graph follows aut sym aut sym suppose bijective map acts automorphism fixes lemma two elements extended neighbourhood follows preserves edges automorphism proof bijective map acts automorphism fixes every point similar proposition guarantees existence countably many graphs instance define binary sequence following sequence many followed alternating follows aut infinite direct product one copy sym together infinitely many trivial groups aut sym true aut aut furthermore straightforward see two monoids nonisomorphic groups units respectively hence constructed several examples oligomorphic permutation monoids structures proposition means oligomorphic permutation monoids strong orbits aim use basic framework established construct nonisomorphic examples graphs achieve idea add pairwise finite structures age ensure uniqueness isomorphism end let strictly increasing sequence natural numbers use recursively define binary sequence follows base followed many inductive assuming nth stage sequence constructed add followed many right hand side sequence instance using binary sequence construct fashion example infinitely many graph determined mbhomogeneous eventual plan induce finite graphs onto independent sets induced strings consecutive selecting suitable countable family pairwise graphs appear age ensure graphs different ages prove two lemmas form basis construction examples graphs second lemma folklore proof omitted lemma let binary sequence embed cycle graph size proof let vim let graph edges induced suppose vertex cycle graph degree see pij means pim degree contradiction lemma let two cycle graphs embeds vice versa case isomorphic hence countable family pairwise graphs age suppose strictly increasing sequence natural numbers construct usual fashion size independent set draw vertices thus creating new graph see figure figure corresponding sequence added cycles highlighted red particular note even additional structures still mbhomogeneous properties since added many structures age must careful added extra cycles sizes expressed sequence next proposition alleviates concern proposition suppose strictly increasing sequence natural numbers suppose 
natural number graph outlined contain proof suppose edge set combination edges graph induced finite subsequence qim vim edges cycles added onto aim show cycles size precisely added construction assume contains elements split consideration cases case assume adjacent creating contradiction embed therefore adjacent vij since qij follows edge vij induced added cycle implies vij therefore bij sequence consecutive natural numbers contradiction hence case since happens adjacent vertex vij qij vij vertex vik qik construction ensures edges drawn case contradiction vij qij must case qij case argument preceding case contradiction similar argument case holds element hence made however still possibility originate different show however every vertex comes single element conclude two vertices edge contained edge connected finally embedding forced conclude lemma done corollary suppose two different strictly increasing sequences natural numbers proof different sequences exists without loss generality assume hence embeds proposition embed hence age age isomorphic results prove following theorem many graphs bimorphism equivalent random graph proof strictly increasing sequences natural numbers continuum many examples corollary graphs different ages means constructed many mbhomogeneous graphs furthermore examples property bimorphism equivalent corollary remark consequence means pairwise graphs property stated remark corollary finally section utilise technique overlaying finite graphs order prove second main theorem section recall graph every degree theorem finite group arises automorphism group mbhomogeneous graph proof version frucht theorem exists countably many graphs aut finitely many graphs size less equal exists graph size aut handshake lemma see graph must total edges total possible edges means must induce least define binary sequence following sequence many followed alternating using notation established lemma follows construct illustrated example draw edges obtain graph see figure figure highlighted red similar fashion proposition aim show fixed setwise automorphism series claims claim proof claim assume without loss generality define following sets lemma applies situation proof proposition maximum independent set contained respectively graph six vertices exists maximal independent set size greater equal consider sets maximum independent sets respectively exists hence since maximum independent sets respectively different sizes conclude ends proof claim shows exists automorphism sending claim exists automorphism sending proof claim split proof two cases latter lemma complete graph contains case remains show automorphism sending case union infinite complete graph every vertex connected every vertex means must induced however contains contains induced subgraph reasoning least therefore means done fixed setwise also fixed pointwise proof proposition sandwiched vertices adjacent every vertex adjacent every vertex fixed setwise deduce fixed setwise hence pointwise conclude fixed setwise automorphisms finally show bijective map acting automorphism fixing everything else automorphism every connected every independent map preserves edges automorphism using together theorem follows finite group exists oligomorphic permutation monoid group units worked examples section devoted determining examples homogeneous structures investigation concludes complete classification countable homogeneous graphs also variety ways demonstrate structure firstly homogeneous structure finite partial monomorphism also finite partial 
isomorphism extend automorphism graph suffices show properties proposition finally recall remarks proposition prove suffices show techniques used points demonstrating following positive examples example let complete graph countably many vertices graph homogeneous suppose monomorphism two finite substructures preserve must preserve finite partial isomorphism using homogeneity extend automorphism hence bimorphism complement infinite null graph also proposition example tournament defined oriented loopless complete graph similar argument complete graph example every finite partial monomorphism tournament finite partial isomorphism three homogeneous tournaments random tournament local order see cherlin example let countable universal homogeneous undirected graph also known random graph follows extension property characteristic random graph see figure properties see definition graph proposition example let limit class directed graphs without loops otherwise known generic digraph well known satisfies following extension property see dep finite pairwise disjoint sets vertices exists vertex arc every element arc every element independent every vertex see figure diagram example using show demonstrating turn suppose age monomorphism decompose three disjoint sets fact injective means pairwise disjoint subsets using dep sets arc select vertex arc elements elements independent elements define map due choice monomorphism suppose age antimonomorphism fact finite implies finite exists vertex independent elements dep figure example directed extension property define function thanks choice antimonomorphism preserved hence proposition remark shown class loopless digraphs class hence exists unique homogeneous generic digraph slightly different extension property see use similar proof show continue investigation focusing class structures known cores structure core every endomorphism embedding every categorical structure core homomorphically equivalent core cores play important role theory constraint satisfaction problems see bodirsky habilitation thesis introduction topic widely studied examples cores include countable dense linear order without endpoints complete graph countably many vertices complement proving structure core useful way show structure mbhomogeneous next result shows lemma let core exists finite partial monomorphism isomorphism hence proof let finite partial monomorphism core isomorphism endomorphism embedding extend use result detail homogeneous structures mbhomogeneous example let countable homogeneous graph results mudrinski shown core finite partial monomorphisms isomorphisms send edge instance lemma example oriented graph analogues homogeneous graphs henson digraphs limits class digraphs embedding elements set finite tournaments show core containing tournaments three vertices suppose contradiction exists end independent pair vertices select tournament least number vertices choose create oriented graph removing arc adding extra vertex drawing arc see figure example figure construction example note tournament vertices age homogeneity find copy place respectively vertex place image oriented graph vertices preserves arcs involving follows contradiction belong age must injective assume exists independent pair vertices select fashion choose two vertices remove arc obtain oriented graph embeds via homogeneity find isomorphic copy place respectively hence image induces copy contradiction belong age core set tournaments therefore lemma example let myopic local order defined follows distribute many 
points densely around unit circle every point points arg arg arg draw arc arg note means embeds directed show structure core assume endomorphism independent pair points occurs arg arg arg arg follows arg exists point arg arg endomorphism creates directed loop contradiction suppose pair endomorphism find point endomorphism must preserve relations hence directed obviously false core applying lemma implies finally note proposition statement enough prove structure mep bep show example let generic digraph homogeneous digraph show bep hence mbhomogeneous proposition let tournament vertices let disjoint union vertex note age let antimonomorphism sending independent set vertices substructure exists construction antimonomorphisms preserve potential image point must independent happen would induce independent bep hence proposition example let complement homogeneous graph note contains spanning subgraph algebraically closed satisfies mep however embed independent contain infinite null graph complete follows corollary following final example able present complete classification homogeneous graphs also theorem countably infinite homogeneous graphs following complement complement random graph proof check every item classification countably infinite undirected homogeneous graphs given showed graphs list mbhomogeneous example proposition example disconnected homogeneous graph must countable union complete graphs finite union infinite complete graphs proposition countable homogeneous graphs graphs complements example example end open questions concerning graphs general seen would arduous task classify graphs isomorphism idea bimorphism equivalence represents best hope identify graphs positive answer following question would constitute best classification result possible graphs given amount range examples question every countable graph bimorphism equivalent one five graphs theorem related question following question countably many graphs bimorphism equivalence finally notice every example graph article also motivates final question question countably infinite graph hehomogeneous conversely graph references agarwal reducts generic digraph annals pure applied logic bhattacharjee macpherson neumann notes infinite permutation groups number lecture notes mathematics springer bodirsky core countably categorical structure stacs pages springer bodirsky complexity classification constraint satisfaction arxiv preprint bodirsky constraint satisfaction countable homogeneous templates journal logic computation bodirsky pinsker schaefer theorem graphs journal acm jacm cameron oligomorphic permutation groups volume cambridge university press cameron random graph mathematics paul pages springer cameron relational structures combinatorics probability computing cherlin classification countable homogeneous directed graphs countable homogeneous volume american mathematical diestel graph theory dolinka bergman property endomorphism monoids limits forum mathematicum volume pages dolinka gray mcphee mitchell quick automorphism groups countable algebraically closed graphs endomorphisms random graph math proc cambridge philos sur certaines relations qui ordre des nombres rationnels comptes rendus des sciences paris frucht graphs degree three given abstract group canadian math hartman lcolored graphs european hodges model theory volume cambridge university press kechris classical descriptive set theory volume springer science business media lachlan woodrow countable ultrahomogeneous undirected graphs transactions american 
mathematical society pages lockett truss generic endomorphisms homogeneous structures groups model theory contemporary mathematics page lockett truss notions discrete mathematics macpherson survey homogeneous structures discrete mathematics partially ordered sets order classes finite geometries combinatorica pech oligomorphic transformation monoids structures fundamenta mathematicae mcphee endomorphisms limits automorphism groups algebraically closed relational structures phd thesis university andrews meakin groups semigroups connections contrasts london mathematical society lecture note series mudrinski notes endomorphisms henson graphs complements ars combinatoria munkres topology volume prentice hall upper saddle river reyes local definability theory annals mathematical logic rusinov schweitzer graphs journal graph theory schmerl countable homogeneous partially ordered sets algebra universalis steinberg theory transformation monoids combinatorics representation theory electronic journal combinatorics
4